Tuesday, March 24, 2015

Forgetting the Past or Not Even Caring Enough to Remember

Numerous individuals have recited versions of a simple truth over the years: “Those who do not learn from history are doomed to repeat it.” Yet despite the gravity and accuracy of these words, few seem interested in heeding them. This behavior raises an interesting question: is this lack of consideration for the past driven by individuals themselves or by the means by which history is documented?

The digital age has given rise to a new medium for recording history, one with its own advantages and disadvantages. The principal advantage of the widespread digitization of culture and its associated events is the ease with which information can be recorded and stored, in terms of both opportunity and direct resource costs. Most individuals can type faster than they can write, especially over long periods of time, increasing the efficiency with which information is recorded; electronic formats also eliminate the need to acquire and use vast reams of paper or an even more cumbersome recording medium.

Unfortunately the advantages in storage capacity and speed have also brought forth disadvantages. One important problem for the long-term documentation of history is the speed at which technology changes. Records on paper and other physical media (stone, clay, etc.) have lasted thousands of years, imparting valuable information about past human culture and society, whereas electronic resources are more unstable, vulnerable to anything from simple data corruption caused by a misclick to an EMP or a large solar flare. While there are strategies to enhance longevity, like etched nickel sealed in argon, these options are far too expensive to justify for most data. Even natural deterioration is accelerated in digital storage media, whether directly (a flash drive or CD physically falling apart) or indirectly (a particular medium falling out of fashion with the public and becoming obsolete). One thing paper will never be is obsolete, no matter the “predictive” musings of certain technophiles.

The problem of social viability is further complicated by the number of different formats for various files. While it can be argued that marketplace competition is good, information storage is not a field suited to widespread format competition, especially when so many of the options offer no significant advantages over their “competitors”; what really is the difference among .jpeg, .png, and .tiff in practical terms? Even if a medium remains socially viable, data retrieval can become difficult if the only person authorized to access the information dies and no one else has the credentials needed to take over access. Certainly hackers and various security services can be called upon to correct this problem, but such action takes time and money and may not always be available or successful.

Fortunately these problems are probably the easiest of the disadvantages of digital recording to manage. Standardizing video and picture formats, reducing the myriad options to one or two, should address orphan-format concerns, though it is unclear when such a step will actually be taken. Proper diligence by consumers in updating and converting existing files should address conversion issues, and software companies can help by adding backwards compatibility even if it costs a little extra to develop. Some believe that all of these problems are moot due to cloud storage, but such storage is dubious precisely because it has no track record of reliability over even decades, let alone centuries; just look at all of the online data storage services that have gone out of business over the last decade.

A more imposing problem is that the ease and reduced workload involved in producing and recording information has marred the process of distinguishing information that is actually important from baseless opinion. In the past only individuals who were intelligent or incredibly passionate produced significant information on a topic because of the work involved. Of course information produced in the past was not immune from error or bias, but due to the effort required to produce it for mass consumption it was not difficult to identify bias born of excess passion. Now, because it is easier to produce information for public consumption, there is reason to suspect, largely because it is already happening, that less diligent individuals will produce more error-prone information in addition to more information being produced in general. In fact, in 2011 the IDC Digital Universe study estimated that humanity had created 1.8 zettabytes of new data, an amount expected to grow exponentially over the next decade.1

Unfortunately, while individuals marvel at the sheer storage capacity of digital systems, the time humans have available to sort through this information remains ever fleeting. Given the ever-present human ego and frequent inability to accept being wrong, a vast majority of this produced information and “historical” record is significantly biased toward a particular viewpoint without care for accuracy. Too often humans accept knowledge found online as accurate, especially if it supports their personal viewpoint, so the increased propagation of information will make weeding the accurate from the inaccurate even more difficult.

The ability to separate truth from wishful thinking or outright lies is further complicated by the actions and position of formal education. Sadly, while the amount of history grows with every passing second, most modern high school history requirements rarely advance past the Vietnam War, leaving most of the 1970s to the present unstudied or even undiscussed. This oversight creates an inherent negative: at best, history teachers, those who should be better equipped than students to deduce and instruct on the accuracy of historical events, are unable to help students understand the truth; at worst, the exclusion of this information may lead some students to conclude that it is not important. Clearly such a conclusion is incorrect, for there have been many important historical events, both in the United States and globally, between 1975 and 2015.

The lack of importance assigned to modern history sends a message to society at large, especially those in power, that they can ignore public concerns about their actions and decision-making: once those events fade into the past, the public and history itself will not judge inappropriate action harshly because people will not regard remembering it as important. Sadly, those who do remember may simply be labeled “over-emotional” or biased actors depending on their viewpoint. Such a ramification is troublesome because any increase in hubris among those in power will typically produce negative results for the masses; most people in power tend to believe that helping society hurts their short-term capitalization potential, so there is little incentive to help.

Some may raise the concern that there is not sufficient time to teach all of the existing history; there are more “important” things to do, like administering aptitude tests. The best way to address this problem is to eliminate the instruction of overlapping material, which is typical of history education in school, where elementary, middle/junior and high school classes frequently discuss the same events over and over again through “review” sessions. One possible strategy for eliminating this overlap would be to divide U.S. history across the grades as follows:

Grade = Material

5th = Colonial Period (1600s)
6th-7th = Revolutionary and Constitutional Period (1700s)
8th = Early Nation Development, Civil War and Reconstruction (1800s)
9th-10th = World War I, Great Depression, World War II and Early Cold War (1900-1950s)
11th-12th = Korean War, Vietnam War and Modern History (1950s-Present)

For some high schools the above schedule may involve expanding U.S. history from a single semester to two, which should not be a problem given the importance of history. The way history is taught also needs to change, for in the digital era the days of memorizing dates and names are gone. Instead students should be instructed on the motivations and rationales (and how justified they were) that drove the “decision-makers” of a given time to make the choices they made. Knowing that D-Day occurred on June 6, 1944 is far less important than knowing what planning went into its execution and why such a strategy was viewed as necessary.

Overall, history has been a somewhat difficult sell to the general public, in large part due to the criticism “how does this help me in my life?”, which has created a motivation to not even bother remembering. Such an exclamation is puzzling, for history is rife with incredibly meaningful “what ifs” that not only enhance thought but also provide opportunities to learn how to better judge a given situation, increasing understanding of potential ramifications. While changes can be made to the methods of recording history, how it is taught and how the public perceives its importance, in the end each individual must do a better job of appreciating the importance of history and learning from its examples; otherwise history will truly repeat itself until the repetitive bad decisions of society finally result in a hastened end to human society itself.

Citations –

1. “Extracting Value from Chaos.” IDC iView. June 2011.

Wednesday, March 18, 2015

Improving Customer Knowledge on Health Insurance

One of the tenets of the Affordable Care Act (ACA) is that consumers will lower healthcare costs by comparing and contrasting prices for both insurance and medical procedures, spurring competition among these respective agencies. Unfortunately the strategy is marred by the fact that the current marketplace addresses insurance provider characteristics only in a limited capacity (co-pays, out-of-pocket limits, deductibles, etc.) and provides no information on cost relationships between insurance companies and a given hospital. There is also no meaningful marketplace focused on medical service providers (MSPs) where a customer can compare the cost of an MRI between hospital A, 134 miles from his home, and hospital B, 46 miles from his home. Numerous independent groups attempt to produce a meaningful “shopping environment”, but despite these efforts overall information is limited, there is a lack of universal regionality, and most customers are unaware that these sites even exist, apart from a random annual story about them on a blog. Without the ability to identify the best medical service prices, it is difficult to expect healthcare consumers to be intelligent shoppers and aid in the reduction of healthcare costs.

One of the biggest obstacles to producing a more transparent medical pricing environment is the arrangements negotiated between various hospitals and insurance companies. These deals sort medical service institutions into “in-network” and “out-of-network”. Insurance companies cover “in-network” providers because they can use their economies of scale to negotiate lower, controlled prices, something they cannot do with out-of-network providers. In theory one would think insurance companies would value a transparent marketplace because it would force MSPs to compete against each other for customers, thereby lowering costs for the insurance industry. Presumably an insurance company would set a price ceiling for each type of service, but few MSPs would exceed a reasonable limit because doing so would lead to a significant number of services rendered without proper reimbursement, which would put them out of business. If more medical transparency would theoretically benefit insurance companies, why is there no push from insurance companies to produce such an environment?

Three immediate reasons jump to mind when attempting to explain resistance by both MSPs and insurance companies to more transparent pricing, a resistance at odds with the free-market principles these groups claim to support:

The first reason for opposing transparency can be viewed as the most plausible: there is a highly complicated and competitive relationship between MSPs and insurance companies, in which these agencies work together to ensure proper prices and a sufficient customer base so that both parties profit. If significant transparency were introduced into such a relationship, it would add a third major component: the decisions and tendencies of potential customers. Without understanding the nuances of the negotiation and the economic obligations of both the insurance companies and the MSPs, the customer pool would make sub-optimal decisions resulting in inefficiencies, which would increase costs, reduce profits and possibly even endanger certain businesses.

While there is some truth to the complexity of this relationship, the above philosophy flies in the face of the general tenets of capitalism. No real capitalist has ever argued that a potential customer pool should be divided among a group of businesses without genuine competition. Instead the mindset has always been for businesses to produce advantages in their products/services that will attract customers, and if they cannot produce enough advantages, the business folds up shop.

Some could argue that because buying health insurance and having access to medical care is more important than buying a hamburger, it cannot be judged by the same principles as regular commerce. Unfortunately for its proponents, this idea is quickly dismissed when one recalls the ruthlessness and questionable tactics insurance companies have used to deny coverage to their customers on technicalities, as well as the excessive charges most MSPs levy against their patients that are “negotiated away” by insurance agreements. If MSPs and insurance companies want the above structure of “secret balance”, then they should become non-profit organizations, which would at least justify the argument.

The second reason for opposing transparency would be concern about divulging trade secrets regarding how prices are negotiated. The “trade secrets” argument is old hat for corporations attempting to avoid transparency. In some cases it is legitimate; however, against medical transparency it is not, because medical service transparency simply means declaring a single price for a given service, e.g. a standard single knee replacement, along with a general quality rating from an independent auditor. There is no expectation of publishing the methodology behind a particular price. It is also inappropriate for insurance companies or MSPs to suggest that merely knowing the price of a given service gives competitors a negotiating advantage. Even if it did, all parties would have the same advantage in an environment where all service prices are publicly available, so there is no reason to be concerned about the revelation of trade secrets.

The third reason for opposing transparency is the most obvious and more than likely the correct one in that the insurance industry and MSPs in general are happy with the current system because they are able to make large amounts of profit and are uncertain if a new transparent and more competitive system would decrease or increase that profit. To better understand how this uncertainty arises one must study the potential changes that occur in a more transparent environment.

In a more transparent environment one of two possible scenarios will emerge between the MSPs and insurance companies. In the first scenario insurance companies will maintain their existing relationships with MSPs and simply be competing against other MSPs and their insurance provider relationships. Basically insurance companies will keep their provider “zone(s) of control”, but consumers will be able to better understand the economic benefits from moving between those zones to best meet their needs.

In the second scenario the new competitive environment may cause MSPs to “unbind” themselves from insurance companies, eliminating some to most of the provider control and its associated power. Without provider relationships insurance companies would lose their “zone(s) of control”, which could lead to a mass exodus of individuals from one insurance company to another. Open competition among MSPs would eliminate any business guaranteed by these zones of control, so MSPs would not be beholden to insurance companies, and insurance companies would have to compete for business without guarantees. Clearly this second scenario is much more dangerous to the profitability of insurance companies and even MSPs, which would have to compete as well, just to a lesser extent.

On a political level both Republicans and Democrats should accept and support increasing transparency regarding medical procedures. Republicans should support such a measure because the existing lack of transparency is anti-American and anti-capitalistic, restricting the choice and freedom of individual consumers and increasing distortion in free markets. Democrats should support it because it would lower government costs and could reduce income inequality by lowering individual costs through reduced prices; medical care is typically a fixed cost, thus it weighs more heavily on poor individuals than on rich ones.

On a public affairs level almost all individuals should support increased transparency of medical procedures. The first, obvious reason for this support is the reduced prices for medical care that would accompany increased competition. The second, less obvious, reason is the ability to better prepare for future medical care. One of the biggest problems with the current system is that most of the focus is on elective or chronic procedures rather than acute procedures; basically, there is little shopping when someone believes he is in urgent need of medical care. In that situation a person can become justifiably emotional and scared, reducing his ability to behave like a rational actor when procuring competitive medical services. However, in a transparent environment individuals would be able to plan ahead of time and determine what hospital to attend if procedure A is needed versus procedure B, eliminating the need to decide on the spot.

Unfortunately while increased medical transparency should have significant government support from both major political parties as well as widespread public support, any Federal law demanding significant transparency requirements from these institutions does not appear on the horizon and for reasons discussed above one should not expect insurance companies and MSPs to become significantly more transparent on their own. The small collective of state transparency laws are a positive step, but should not be expected to significantly lower national healthcare costs.

For example, regarding state-required transparency, in 2014 the Catalyst for Payment Reform and the Health Care Incentives Improvement Institute judged that only Colorado, Massachusetts, Maryland, Maine, New Hampshire, Virginia, and Vermont had some form of sufficient law(s) requiring appropriate and useful price reporting in support of transparency. However, among these states only Massachusetts and Maine had suitable, consistently operating websites to host pricing information, allowing consumers easy access and the ability to use it to make informed healthcare decisions.1 Despite this deficiency in overall transparency, there is an important step insurance companies can take to increase transparency that should not threaten any real profitability and does not require state or Federal action: changing the format in which patients are informed of how their medical costs are covered after a procedure.

The breakdown of what medical procedures were performed, their costs and who/what is responsible for which payments is commonly detailed in an “Explanation of Benefits” (EOB) form. The biggest problem with the generic EOB form is, ironically, a lack of explanation. The EOB is basically a form letter to the patient with various numbers and procedure codes thrown on a piece of paper; there is no explanation tied to the patient’s personal experience and the procedures performed. It would be unreasonable to expect insurance companies to write unique, detailed explanations and evaluations for every successful claim. However, it is not unreasonable to expect them to produce a clearer and more transparent document.

At the core of this lack of transparency is that insurance companies and even MSPs place too much onus on the patient both to understand the intricate elements of his/her insurance policy and to use that understanding to interpret the EOB. Interpretation is made more difficult by the lack of qualitative information in the EOB. Insurance companies could make things much easier on patients if they simply tied the insurance policy to the EOB and then used both qualitative and quantitative information to demonstrate step-by-step, with words and not just numbers, how the policy was used to pay or not pay for certain procedures. For example, instead of simply stating that “sum x is to be paid by the patient due to the maximum coverage reached under the conditions of the plan for this service”, the EOB should document the existing coverage value and how that value was applied to the care received.
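To make the step-by-step idea concrete, below is a minimal sketch of the kind of narrative EOB described above. All plan terms, dollar amounts and the function name are hypothetical illustrations, not real insurance data or any company’s actual claim logic; the point is only that a plain-language walkthrough can be generated mechanically from the same numbers a standard EOB already contains.

```python
# Hypothetical sketch: turn the raw numbers on an EOB into a
# step-by-step, plain-language explanation of the patient's share.

def explain_claim(billed, allowed, deductible_left, coinsurance_rate):
    """Walk through one claim; return (patient_owes, narrative)."""
    steps = []
    steps.append(f"The provider billed ${billed:.2f}; your plan's "
                 f"negotiated (allowed) amount is ${allowed:.2f}.")

    # Step 1: any remaining deductible is applied first.
    deductible_part = min(allowed, deductible_left)
    remaining = allowed - deductible_part
    if deductible_part > 0:
        steps.append(f"${deductible_part:.2f} goes toward your remaining "
                     f"deductible of ${deductible_left:.2f}.")

    # Step 2: what is left is split between the patient's co-insurance
    # share and the amount the plan pays.
    patient_coins = remaining * coinsurance_rate
    insurer_pays = remaining - patient_coins
    if remaining > 0:
        steps.append(f"Of the remaining ${remaining:.2f}, you pay "
                     f"{coinsurance_rate:.0%} co-insurance "
                     f"(${patient_coins:.2f}) and the plan pays "
                     f"${insurer_pays:.2f}.")

    patient_owes = deductible_part + patient_coins
    steps.append(f"Your total responsibility for this claim is "
                 f"${patient_owes:.2f}.")
    return patient_owes, "\n".join(steps)

# Example with made-up numbers: $1200 billed, $800 allowed,
# $300 of deductible left, 20% co-insurance.
owed, narrative = explain_claim(billed=1200.00, allowed=800.00,
                                deductible_left=300.00,
                                coinsurance_rate=0.20)
print(narrative)
```

Because real plans omit or combine these steps (co-pays, out-of-pocket maximums, out-of-network penalties), an actual implementation would need a few more branches, but since plans come in a small, finite number of standard shapes, the amount of such logic is bounded.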

Such a change in strategy should not be difficult because insurance companies already use policy information to create the EOB for individual patients; the only real change would be the addition of qualitative information. For those who think such a change would be too difficult, cumbersome or expensive, the objection fails because truly individualized plans do not exist, thus there is only a small, finite number of plans to address. If purchasing insurance were likened to purchasing a meal from McDonalds, the customer would only be able to choose from a specific number of pre-assembled meals (i.e. value meals) rather than build his own meal from individual items (i.e. a la carte). There are no significant a la carte insurance plans, thus the overall cost of making these changes is minimal. In addition to the step-by-step analysis, which would use generic statements relative to co-pays and co-insurance, a more expansive EOB should include a small glossary to explain specific terms.

Some could argue that such a change would inconvenience insurance companies and that it is the responsibility of the policyholder to know the extent and limits of his/her policy; in addition, the Internet provides resources to “guide” patients through the general meaning of an EOB. On its face this argument is insufficient for multiple reasons. First, despite what some want to believe, not every individual has access to the Internet, so looking online for assistance is not universally applicable. Second, arguing against the above changes to the EOB is an argument against efficiency and productivity. What makes more sense: insurance companies making a single capital investment, likely less than 1% of total yearly profit, to make their EOBs more user-friendly and easier to understand, or millions of people spending two to six hours attempting to understand their EOBs in the current form without any guarantee that they will? Suggesting the latter should only be answered with a silent and sad shake of the head.

There is a big difference between an individual thinking he knows what his medical insurance covers and actually seeing what it covers. A more detailed and consumer-friendly EOB will help individuals better understand the actual application of their medical insurance coverage and will increase transparency and consumer choice by producing better-informed consumers. It would be ideal if the Federal government would involve itself on this issue by producing legislation that creates a standardized EOB format, instead of relying on companies to do it themselves or on states producing individualized legislation that may not be uniform. Overall, if one of the major goals of legislation like the ACA is to reduce medical costs, then transparency is a key element of increasing consumer choice and lowering costs. While a truly transparent system like the one seen in how most consumer goods and services are purchased may still be a ways off, producing a more detailed EOB is an easy and straightforward means to more informed consumers and possibly lower medical costs.

1. Delbanco, S., Brantes, F., et al. “Report Card on State Price Transparency Laws.” Catalyst for Payment Reform and Health Care Incentives Improvement Institute. Mar 2014.

Wednesday, February 18, 2015

The Decline of Marriage

Currently there is either a crisis in marriage or simply a course correction. It is no secret that the percentage of men and women in the United States who are currently married has decreased steadily and significantly from 1970 to now. While there has been a significant increase in divorce since 1970, most of the decrease in marriage rates has come from individuals choosing not to marry at all. In addition to the general overall drop, decreases in marriage rates have differed based on income/assets, as shown below.

Figure 1: Marriage rates for men between the ages of 30-50 by income bracket from 1970 to 2010 (1)

Females have seen a similar pattern, with high-income earners experiencing only a small drop in their marriage rate while working-class women have seen a drop of at least 15%.1 One question to help characterize this trend is: are marriage rates actually in trouble (i.e. they will continue to fall in the future), or are they naturally dropping from their seemingly unrealistic levels in the 1950s and 1960s and will simply stabilize at a dynamic equilibrium point in the near future?

While this question cannot be directly answered at the moment, it is important to determine why marriage rates have fallen as they have over the last half-century in order to better understand which of the above answers is more probable. In both the past and the present, three factors have largely driven the desire to marry or not to marry: cultural, economic and psychological. How these factors have changed with time should provide a sufficient and effective basis to address the question.

One reason a drop in marriage rates should not be surprising is a more liberal cultural shift in attitudes toward remaining single. In the 1950s and 60s individuals who elected to remain single were typically thought of as weird, strange, “players” and/or inferior because they were not able to attract a spouse. In modern times such generalizations are made much less often, and remaining single is commonly regarded as a valid lifestyle choice. This change has freed individuals from the cultural pressure to marry out of fear of being passively ostracized from society.

Changing attitudes toward remaining single were not the only shift, for attitudes regarding women in general have also significantly changed. In the past there was typically an underlying understanding that after marriage the male would hold the job and earn the money (i.e. be the breadwinner) while the female would stay at home and manage the domestic affairs of the family: cleaning the house, raising the children, etc. This structure made it imperative that women find husbands who could support them, for their prospects of finding employment to support themselves were limited, even with the gains made from their work during WWII reducing prevalent stereotypes that they were unable to perform certain jobs. Over time the significant and continuous increase in women’s labor force participation has changed this “understanding”, in the eyes of some even rendered it obsolete. Therefore, for a number of women marriage is no longer the principal means of finding economic support.

In addition to the cultural shift in accepting women into the workforce, changes in social norms and the legal system have made divorce less stigmatizing but more difficult to execute due to increased legal complexity. These changes in the execution and structure of divorce proceedings are believed to significantly influence the desire of single individuals not to marry. Interestingly, it could be argued that due to the legal and emotional complexities of divorce, for a number of individuals a divorce is more emotionally and psychologically taxing than a standard termination of an existing relationship (i.e. a breakup), in both magnitude and duration.

Another element amplifying the negative associations of divorce is the cultural shift concerning co-habitation. In the 1950s and 1960s the chief factor limiting co-habitation was not that it was shunned by general society (although it was), but that people did not consider it a viable alternative to marriage. Therefore, even if someone had concerns about the negative elements of a potential divorce, there were typically only two ways a relationship could resolve: breakup or marriage. Now co-habitation has become a legitimate alternative, which could place greater emphasis on the negative elements of divorce. While a number of individuals do co-habitate before marriage, co-habitation is not the catalyst for marriage that some claim.

The chief disadvantage of marriage relative to co-habitation is the ease with which the latter can be ended. Both entrance into and exit from marriage involve significant regulatory hurdles, whereas co-habitation simply involves moving some material possessions into, and if necessary later out of, a physical location. Entering a marriage has hurdles that can complicate things, and one could argue that these hurdles provide a “weeding out” element whereby non-serious applicants fall by the wayside. However, divorce is the real problem, for even when a divorce is amicable it takes weeks, if not months, to fully resolve the separation.

An interesting psychological aspect of the fear of divorce is that a number of individuals view divorce as an almost inevitable occurrence, as if the marriage were destined to fail. It is strange that individuals would think in such a manner; how often do most people envision taking an action with an initial mindset of failure? The negative ramifications of divorce are only relevant if one views the probability of its occurrence as considerable. Perhaps such a mindset reflects one’s general standing in life, for high social status individuals (well-off college graduates) have not seen a significant drop in marriage rates compared to those with less. Basically, it can be argued that the further down the economic ladder one is, the more likely one’s life has included significant failure, thus a potential marriage is viewed as having a higher likelihood of failure than it would by someone who has had more success in life.

The Affordable Care Act (ACA) could also prove a detriment to marriage, as one of the few remaining tangible benefits of marriage over co-habitation is that spouses can share health insurance, meaning that one person who could not afford or be eligible for health insurance could be covered under their spouse’s policy. However, the ACA forces insurance companies to cover all individuals regardless of circumstance and allows states or the Federal government to provide subsidies, which are much more viable for singles than for married individuals, to ease costs, thereby significantly damaging the shared healthcare advantage of marriage. Whether or not this will influence marriage rates is unclear, though it is theoretically plausible that there should be little change because in modern times the acquisition of health insurance has not been a significant motivation for marriage.

While there is no argument that individuals who marry have better physical and mental health outcomes than individuals who remain single, there is less certainty regarding the differences in health outcomes between married and co-habitating individuals. A majority of the research appears to come down on the side of marriage, in part because marriage produces higher probabilities for quality relationships, but there does not appear to be a decisive difference between the two.2-4

There are two major rationales for this result. First, individuals who marry recognize the strong loving bond they have with their significant other, and maintaining a quality relationship is simply easier due to these positive connections. In essence these individuals gain almost a status-based ego boost from the marriage, viewing it as the “ultimate level” of relationship status. Second, the “fear” of divorce may actually be beneficial on some level: the difficulties surrounding divorce could force individuals to apply more effort and care in working through problems in the relationship, improving psychological well-being versus co-habitation, where escape is so easily achieved that a small problem could derail the relationship or be ignored and allowed to fester.

As mentioned above, the advancement of women in the workforce has reduced some of the more questionable rationales for marriage, both culturally and economically. From an economic perspective, the ability of women to support themselves financially has had a negative impact on men’s general marriage prospects. Women can now be more selective regarding whom they want to marry rather than focusing solely on “landing a man” because they need someone to support them.

This new selection freedom for women may have moved marriage from a quasi-necessity to a luxury. Unfortunately, like most luxuries, this produces an “arms race” mentality among many of the competitors (men) to demonstrate the value of a relationship. As women now have more freedom in selecting a marriage partner, males have to do more to make themselves attractive, typically along the lines of having/earning money. Therefore, it can be argued that one of the biggest influencing factors on marriage rates is income inequality; i.e. the less money one has, the less likely one is to get married because one is not an attractive candidate. There is sufficient evidence supporting the influence of income inequality, as marriage rates have fallen much faster among poor and middle class individuals than among rich individuals.1,5,6

A number of conservative voices have argued that one explanation for the drop in marriage rates is the penalties associated with marriage in the tax code. Originally the policies that produced these penalties were actually boons to married couples, but with the cultural shift that has afforded women more workplace opportunities, these boons have tended to become busts. Of all the financial elements affecting marriage in the tax code, two should have the greatest influence on marriage rates: joint filing, including its association with welfare benefits, and the Social Security spousal benefit.

In modern times joint filing has become a poor motivator for marriage. First, a joint filing is significantly more complicated than an individual filing, creating undue stress regarding potential benefits and detriments from the rate brackets and income divisions within the couple. Unfortunately, most of the time a joint filing in a two-occupation household forces the married couple to pay more in taxes. Joint filing was designed around the traditional idea of a marriage where one individual (typically the male) works to support the rest of the family financially and the female supports the family domestically; because the female does not get paid, the rules associated with joint filing typically produced a lower tax rate.

Overall, depending on the income disparity, there are two possible outcomes for a married couple where both individuals have jobs: 1) if the individuals are in different taxable income brackets, the one in the higher bracket will typically pay less and the one in the lower bracket will typically pay more due to income averaging; 2) if the individuals are in the same taxable income bracket, both typically pay more. The possibility of greater payment occurs because while income is summed, the boundaries defining the various tax brackets, after the first two brackets (10% and 15%), are not proportionally maintained versus their single counterparts. For example, in a tax filing for a single individual the boundaries defining the 25% rate are $36,901 to $89,350, whereas in a joint filing the boundaries are $73,801 to $148,850; note that the joint upper boundary is $29,850 less than double the single upper boundary.
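The penalty case can be made concrete with a short sketch using the 2014 bracket figures cited above (the couple’s income figures are hypothetical, chosen only to illustrate the point):

```python
def tax(income, brackets):
    """Progressive tax: brackets is a list of (rate, top_of_bracket) pairs."""
    owed, lower = 0.0, 0.0
    for rate, upper in brackets:
        if income <= lower:
            break
        # Tax only the slice of income falling inside this bracket
        owed += rate * (min(income, upper) - lower)
        lower = upper
    return owed

# 2014 federal taxable-income brackets (rate, top of bracket)
SINGLE = [(0.10, 9075), (0.15, 36900), (0.25, 89350), (0.28, 186350),
          (0.33, 405100), (0.35, 406750), (0.396, float("inf"))]
JOINT = [(0.10, 18150), (0.15, 73800), (0.25, 148850), (0.28, 226850),
         (0.33, 405100), (0.35, 457600), (0.396, float("inf"))]

# Hypothetical couple: two earners in the same bracket, $85,000 taxable each
as_singles = 2 * tax(85_000, SINGLE)   # 34,212.50 filing separately as singles
as_couple = tax(170_000, JOINT)        # 34,847.00 filing jointly
penalty = as_couple - as_singles       # 634.50 more simply for being married
```

Because the joint boundaries above the 15% bracket are less than double the single boundaries, the couple’s summed income spills into the 28% bracket that neither earner would reach filing alone.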

Based on the above rules, while there may be a small benefit to certain individuals who marry someone below their income bracket, in practice only a minority of marriages cross income brackets to the point where this element is relevant; therefore, a majority of individuals who get married will suffer increased taxes. In fact the individuals with the highest probability of receiving a tax benefit from joint filing are those who need it the least, the rich. However, the most problematic element of the direct tax bracket assignment of joint filing affects middle class marriage, because those individuals have less money to lose than rich individuals when suffering the penalty.

The direct income summation tax penalty can influence the marriage potential of all parties; however, this summation has a greater indirect negative influence on the poor because of its association with the welfare system. Understandably the welfare system has an income ceiling one must be below in order to claim benefits, but when two welfare recipients near this ceiling marry, the income summation disqualifies both from receiving further benefits. Therefore, this structure of how one qualifies for welfare benefits when married produces another economic obstacle to motivating poorer individuals to marry versus co-habitation.
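A minimal sketch of this benefit cliff, using an entirely hypothetical program with a $15,000 eligibility ceiling (real programs differ in their ceilings and household-assessment rules):

```python
CEILING = 15_000.0  # hypothetical income ceiling for benefit eligibility

def eligible(household_income):
    """A recipient qualifies only while assessed income stays at or below the ceiling."""
    return household_income <= CEILING

a_income, b_income = 12_000.0, 12_000.0

# Assessed individually, both partners qualify
both_qualify_unmarried = eligible(a_income) and eligible(b_income)  # True

# Married, the incomes are summed into one household figure: neither qualifies
qualify_married = eligible(a_income + b_income)  # False: 24,000 > 15,000
```

Each partner loses nothing by staying near the ceiling alone, but the act of marrying sums the two incomes and pushes the household past it, which is the economic obstacle described above.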

The spousal benefit in Social Security is the second problem in the economics of the modern marriage. The original design envisioned a traditional marriage where the individual with the job (typically the male) who paid into Social Security would receive the standard benefits associated with that payment, whereas the individual without the job (typically the female) would receive a benefit approximately one-half that size based on the marriage. Overall this “traditional couple design” would result in a retirement benefit of approximately 150% of the benefit of a single individual. The purpose of the design was to act as insurance protecting the non-working spouse against the loss of the worker’s wage. Note that this benefit can also apply to divorced women who were married for a certain period of time (at least ten years).

However, once again the design was meant for a single-worker marriage. Working wives pay full Social Security payroll taxes, but the benefits derived from these payments compete with the spousal benefit, i.e. they only get to claim the one of higher value. Since most husbands make more money than their wives and typically work longer (although this latter aspect may be changing), the spousal benefit will frequently be larger. Therefore, these working wives collect the same Social Security benefit they would have received had they not worked at all; all of the payroll taxes paid provide no future benefit and are simply lost income to the government. Overall, while there is some loss of funds due to the lack of benefit from the payroll tax, for the most part the detriment is marginal because working spouses still significantly benefit over non-working spouses due to the wages earned from their employment.
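The “higher of the two” rule can be sketched with hypothetical monthly amounts (the dollar figures are illustrative only, not actual Social Security computations, which depend on lifetime earnings records):

```python
# Illustrative monthly amounts only; not real benefit formulas
worker_benefit = 1600.0                  # husband's benefit from his own payroll record
spousal_benefit = 0.5 * worker_benefit   # 800, available purely through the marriage
wife_own_benefit = 700.0                 # wife's benefit earned from her own payroll taxes

# A spouse collects whichever claim is larger, never both
wife_collects = max(wife_own_benefit, spousal_benefit)  # 800, same as if she had never worked
household_total = worker_benefit + wife_collects        # 2400, ~150% of the worker's benefit
```

Since 800 from the marriage exceeds the 700 she earned herself, every payroll dollar she paid in produced no additional retirement income, which is the lost-income problem described above.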

One simple way of dealing with this problem is eliminating the spousal benefit, but taking such action would leave most women worse off because the benefit from their own payroll taxes is less than the spousal benefit; women who function in the traditional homemaker role would be hurt even more. However, some would argue that “traditional” marriages have become rare due to both the increased number of working women and the decreased number of marriages, thus any detriment here is marginal. Another point of contention is that any change to the spousal benefit must include a phase-out period because some individuals still rely on it. Overall, between the two, modernizing joint filing to support middle class individuals should take precedence over changing the spousal benefit.

Another economic element that could influence marriage, but has not garnered much attention, is income stability or volatility. While somewhat crude, marriage can be thought of as an investment, and anyone with business acumen will agree that uncertainty is the most dangerous element in investing. High rates of income volatility produce significant uncertainty regarding the prospects for financial stability in a high consumption commitment investment like marriage. Some research has identified that rising income volatility could explain a significant portion (one-third) of the decline in marriage.7

The third element that influences marriage rates is the interpretation of inter-gender relationships, along with how they begin and evolve. In the 1950s most courtships proceeded similarly, starting with the male asking the female out on a first date; after numerous additional dates some couples engaged in their first act of sexual intercourse. If couples did engage in sexual intercourse it was rarely talked about, especially to parents. Finally, after a stable and lengthy period of time together, couples identified whether or not they wanted to get married.

Modern times have developed a more “hook-up” mentality where individuals who are not even in a formal relationship, or even on a date, will get together for sexual intercourse and then never significantly interact with each other again. Some conservative groups have claimed that greater access to pornography has reduced marriage rates, but to make this argument one would have to demonstrate that a significant motivator for marriage was the consistent ability to have sexual intercourse where this access was not otherwise available. This argument is defeated by the fact that sexual intercourse between non-married individuals has become rather commonplace due to the change in how people view relationships and is more easily engaged in than the longer courtship period associated with marriage.

Unfortunately for pro-marriage proponents, it appears difficult to reverse the more lax attitudes of modern youth regarding sex and love. Some would argue that while this attitude may complicate creating positive future romantic relationships when these individuals are younger, the increased sexual freedom produces better overall partners when they do decide to get married. Whether or not this point is accurate remains unknown, but it appears to be a reach given the existing divorce rate.

Overall, of the three elements that heavily influence marriage rates, both psychological attitudes towards sex and love and most cultural elements appear too difficult to change. The casual attitudes towards sex are too ubiquitous, reverting the cultural gains made by women would be immoral, and eliminating the acceptance of co-habitation appears irrational as well as incredibly improbable. Therefore, the chief element that remains available for significant positive action to increase marriage rates is economic influence, along with some more minor cultural elements.

One of the bolder and potentially more effective ways to produce a better economic environment for marriage rates would be to neutralize the negative influence of income inequality and volatility by passing a guaranteed basic income (GBI). A GBI would make marriage more attractive by decreasing economic volatility through a constant stream of income that can be used to ensure acquisition of basic needs even in the face of hard economic times. In addition, a GBI could lessen the negative impact of a divorce if the marriage does not work out, reducing the stress associated with uncertain finances in the face of a potential divorce.

While a GBI would be a sweeping strategy for improving marriage rates, it is understandable that the magnitude of such a strategy would face strict opposition from powerful interests. Another strategy to improve marriage prospects would be to change how married couples file jointly, adding an additional filing option that would reduce the negative aspects of a joint filing in a two-income marriage, especially for middle class filers, yet allow traditional marriages to maintain their tax advantage.

From a psychological standpoint, pro-marriage individuals or groups should focus on the positive elements of marriage over co-habitation, like improved health outcomes and increased relationship stability. Effort also needs to be applied to neutralize the general negative malaise that has allowed divorce to govern the conversation about marriage outcomes, by attacking the negative psychology that entertains divorce as the more probable way a marriage ends. Finally, one could simplify divorce proceedings in general, which would reduce the resources that individuals must devote to terminating marriages that are not salvageable, thereby reducing the level of fear associated with divorce. Divorce simplification could draw criticism from parties who worry that too much simplification would increase divorces, serving not as a genuine means to improve livelihood but as a crutch or escape valve when a marriage gets a little rocky.

Overall, based on how the three governing factors that influence marriage rates have changed, it is difficult to assume that the recent change in marriage rates is simply a “course-correction”. Exacerbating the problem, marriage rates are further threatened by the failure to address the more pressing economic factors that produce obstacles to marriage. In addition, it is important for society to focus on the positive elements associated with marriage, like the health and stability benefits, versus the negative ones, like the probability of divorce. If these economic and psychological factors are not addressed, then it stands to reason that marriage rates will continue to drop among non-rich individuals, eventually characterizing marriage as an event that occurs more for the wealthy than the non-wealthy.


1. Greenstone, M, and Looney, A. “The marriage gap: the impact of economic and technological change on marriage rates.” Brookings Institution. 2012.

2. Robles, T, et al. “Marital quality and health: A meta-analytic review.” Psychological Bulletin. 2014. 140(1):140.

3. Thoits, P. “Mechanisms linking social ties and support to physical and mental health.” Journal of Health and Social Behavior. 2011. 52(2):145-161.

4. Musick, K, and Bumpass, L. “Re-Examining the Case for Marriage: Union Formation and Changes in Well-Being.” Journal of Marriage and Family. 2011.

5. Schaller, J. “For richer, if not for poorer? Marriage and divorce over the business cycle.” J. Popul. Econ. 2013. 26:1007-1033.

6. Martin, S, Astone, N-M, Peters, E. “Fewer marriages, more divergence: marriage projections for millennials to age 40.” The Urban Institute. 2014.

7. Santos, C, and Weis, D. “Why not settle down already? A quantitative analysis of the delay in marriage.” 2012.

Tuesday, February 10, 2015

Outside Perspectives on Black Leadership

While one can raise concerns about numerous issues regarding the perception of the black community in the United States, the chief concern stems from a failure of leadership within it. This concern regarding the quality of black leadership does not appear exclusive to those outside the black community, for numerous black individuals believe there is a dearth of leadership. For example, various surveys of the black community have produced two significant and troubling results: 1) a large number (30-40%) of those polled do not believe anyone of notoriety or power fights for their interests; 2) the most common “leaders” are those with a sufficient level of national notoriety, but limited political power or recent accomplishment, like Al Sharpton or Jesse Jackson. This accomplishment deficit may be the chief reason why most blacks do not feel empowered or effectively represented by their current leadership. Unfortunately, despite these feelings, little is done to address the issue of poor leadership.

The problems that plague the black community can be categorized two ways: intraracial (can be solved exclusively through action by the black community) or interracial (requires cooperation between multiple parties on a public stage, most likely political, to produce an effective solution).

From the outside looking in the most important intraracial problems appear to be:

- Lack of stable families, as a larger number of black children are raised in single parent households, typically by a mother, than in any other race/ethnicity. One point individuals frequently fail to consider is that grandmothers are commonly involved in childrearing in these situations, so the mothers are not entirely alone. However, there still remains the lack of a positive male figure in the lives of many more black children than in other races.

- An artificial accomplishment ceiling created among members of the black population due to a limited focus on education. This lack of focus produces an inherent ceiling on what an average black individual is able to achieve, reducing the ability of black individuals to acquire wealth and influence as well as reducing the probability of happiness and overall fulfillment in life, possibly increasing the probability of a nihilistic attitude and/or behavior.

- Lack of financial knowledge and planning. One of the reasons that black individuals have trouble building wealth is that they have less knowledge about how to strategically invest money and how to create and adhere to a budget. A secondary issue is the concern that without significant knowledge of financial discipline there is a higher than normal probability of over-consumption. Unfortunately the most public rich black figures, entertainers and especially musicians, encourage this over-consumptive behavior by flaunting high value, yet frivolous, objects like jewelry.

From the outside looking in the most important interracial problems appear to be:

- The disproportionate level of poverty that afflicts the black community versus all other races and ethnicities. While some Republicans and Libertarians like to blame the government for incentivizing blacks not to work through welfare and other elements of the “Great Society”, this belief is false. The persistence of poverty in the black community is a more complicated issue than “government motivated laziness”, which has almost nothing to do with reality and, ironically, is simply a lazy excuse to “hand wave” the problem away.

- The relationship with the criminal justice system, which commonly places a disproportionate number of blacks behind bars. However, a vast majority of these incarcerations are legitimate, undercutting the idea that the criminal justice system is generally racist. Overall there is a difficult relationship between the law and the black community, based partially on history and partially on a lack of psychological evolution by both parties.

- Social reclusiveness is a general problem, for members of the black community tend to largely prefer interacting only with each other. While the preference for interacting with members of one’s own racial/ethnic community is generally universal, the lack of political and influential power possessed by the black community demands more assertive interaction with other races to advance solutions to the interracial problems that afflict their communities.

Why does black leadership fail to drive the positive advancement of solutions to the above problems affecting their community? There are three immediate rationales: 1) leadership actually tries to solve problems, but is not able to do so due to presently insurmountable obstacles; 2) leadership wants to try to solve problems, but is aware of insurmountable obstacles, so it does not even try until conditions become more favorable; 3) leadership wants existing problems solved, but does not want to attempt to solve them because it could fail, and failure would result in lost confidence by the black community in its leadership, resulting in lost influence and power; therefore, leaders find it easier to blame other parties for the problems in an attempt to maintain their influence.

While one likes to believe that the first option is the correct one, if it is, then black leadership has done a very poor job of challenging those obstacles and demonstrating that effort to the general public, including their own communities. For example, the best way to solve a problem when facing obstacles is to produce a very specific solution pathway that demonstrates why those obstacles should be eliminated. Typically this is done by demonstrating that applying the hypothesized solution will be in the interest of the majority and foster a stronger and more efficient community. Major sources of black leadership do not commonly produce these types of solutions, whether spoken or written. This lack of preparation and specificity implies ignorance, laziness and/or a non-genuine effort to solve problems, none of which are attractive in a leader.

The second option is not desirable, for individuals in leadership positions who refuse to undertake the challenge of solving problems, or even to produce the necessary preparation and strategies, should not be in those positions. Waiting is only a valid strategy when the factors influencing a given problem are dynamic, which could make it difficult to determine how one should attack the problem. A vast majority of individuals, regardless of race, would argue that most of the problems in the black community have been relatively static, thus all relevant elements affecting them are generally known.

Unfortunately it appears that the third option is most likely correct: current black leadership is not really interested in and/or able to formulate strategies to solve problems. Instead these individuals are charismatic rabble-rousers who are able to effectively communicate their outrage regarding the standing of black people in society and the continuation of these problems, but because they have no interest in actually fighting for solutions, the publicity gained from their outrage serves no purpose. Their behavior seems akin to Homer Simpson’s slogan when running for Springfield Sanitation Commissioner: “Can’t someone else do it?”.

A similar analogy for this situation is seen in most pharmaceutical companies and their lack of interest in producing new antibiotics. It is difficult to make an antibiotic profitable because while it costs a large amount of money to develop, its application cures a condition, thus typically only a single course is taken per infection. Pharmaceutical companies are more interested in producing drugs that manage, but do not cure, chronic conditions, so numerous prescriptions need to be filled over the course of a patient’s lifetime, producing billions of dollars for the company over decades of use. This principle also appears to be at work among current black leaders in that they are not interested in solving problems in the black community, but they frequently remind individuals that those problems still exist, demonstrating that “they care”.

Some would argue that this analogy is not appropriate, that black leaders legitimately fight for their communities. Justification for this position would in part involve the numerous documents that have been produced that supposedly provide solutions. However, the problem with this argument is that most of these solutions lack the specific details necessary for their application to be viewed as serious attempts. Most of these documents do a good job of highlighting why the problem is a problem, but the solution is often limited to “make it illegal” or “fix it”, lacking real details or commitment by leadership to accomplish what needs to be accomplished. Similar behavior is seen on the bevy of cable news/opinion shows.

Above it was suggested that a possible explanation for the lack of fight by black leaders on black issues is that the fear of failure exceeds the pain of the status quo. So why is failure worse than not even attempting to fight? One of the major reasons may be psychological. Unfortunately the black community places a significant level of dependence on racism as an excuse for its failures. Note that anyone who suggests that racism no longer exists is a fool; however, while it is the 21st century, many in the black community act as if Jim Crow laws are still active and are what holds the black community back from achieving parity with other races.

Basically, whenever something does not go the way of a black person, a common rationale is that racism is to blame for the failure, nothing else. This belief could provide a motive for black leaders not to assertively move forward in attempting to achieve solutions, for if they fail then they must recognize their own shortcomings. However, by not moving forward they can continue to cite racism as the scapegoat for the lack of progress and maintain their power and the benefits that come with that power.

Interestingly, this lack of assertiveness is perplexing because while the fear of losing their power as leaders of the black community in light of failure is understandable, it is not a viable fear in current practical reality. The reason this fear is unreasonable is directly tied to another problem in the black community, the lack of young leaders. Most of these “public” black leaders have few individuals legitimately vying for their positions, especially young “up and comers”, so there appears to be little consequence associated with failure.

Therefore, without the validity of this fear, it can be reasoned that the current black leadership either does not have the fortitude to actually advance solutions to existing black problems or does not have the intellect/creativity to produce the solutions. For the black population to address this issue they must focus on producing new leaders to challenge the “old guard” who refuse or are unable to further advance the positive evolution of black society. Unfortunately the development of black leaders seems constrained by three major problems.

The first problem is economic abandonment, not necessarily by society, but within the black community itself. While the black community likes to put on the air that it is one big unified family, an almost “us against the world” type mentality, economic abandonment seems rather common. One of the most pressing problems in the black community is the disproportionate level of poverty afflicting black individuals compared to other races and ethnicities. However, there is somewhat of a divide between a number of middle-class blacks and poor blacks, and definitely a divide between rich blacks and poor blacks. For most non-poor blacks there almost appears to be a fear that, after “clawing” their way to wealth, interacting with the poor will somehow pull them back down into that environment. Thus, an “out-of-sight, out-of-mind” apathy develops towards the poverty problems of their less wealthy brethren.

Whether or not this reasoning is accurate, in actual practice very few rich black individuals take the time to fight for those with less money. Once in a while there will be some media coverage of a donation that entertainer x gives to a local charity, but unless there is a camera around and a good reason, there is little consistent interaction between rich and poor blacks. Of course this behavior is common to all races and ethnicities, but given the poverty problems and the “brother/sister” perception that the black community exudes, the lack of interaction among black people across economic lines seems morally worse.

This lack of interaction is problematic because these rich individuals have significantly more power and influence than their poor counterparts on the national stage and can produce much higher success probabilities when advancing potential solutions to the problems afflicting the general black community. Also, most rich people, regardless of race, have a lot of free time because either their wealth creates a vast majority of their future wealth (through stocks and other investments), so they do not have to work at all, or their occupation is one of “short time, high value” with significant downtime within its structure (i.e. an entertainer who works on a movie for three months and then can choose to not have another television or movie engagement for three to seven months).

Individuals with significant influence born from wealth, who are typically charismatic and have large periods of free time, can produce conditions that would increase the probability of applying practical solutions to certain problems. However, to produce change one must be willing to get down in the trenches and work through problems and their possible solutions on a consistent level, outside of the glam and flash. Members of professional sports teams appear to frequently attempt positive charitable work, but as a group this commitment is rather scattershot, with a minority applying most of the effort. Sadly, few wealthy blacks are actually willing to step into those trenches to help deal with the artery wounds, like poverty, in the black community beyond a very public, single-application bandage.

The second problem for producing leaders ties back into the issue of racism-victimization and its effect on psychological development in the black community. From an outside perspective, the more vocal elements of the black community appear to value “street credibility” over intelligence. In fact, intelligent blacks are frequently shunned in black society, commonly labeled “not black enough”, as if there were some form of racial ceiling on black intelligence.

The sad state of affairs is that if racism is truly the main element behind black failure in society, then intelligent blacks are critical to overcoming this racism. These individuals have a higher probability of producing the credibility and wealth to start businesses or develop strategies that can provide advancement to qualified individuals, producing more opportunities for black individuals in an effort to neutralize racism. However, the inherent aversion towards intelligent blacks in the black community, possibly because they associate with white individuals, ostracizes these individuals, making their future contributions to black society less probable. The irony could be that a form of bias may be the biggest problem affecting the black community: not racism towards blacks by other races, but the community’s own bias towards those blacks with intelligence.

This negative view is somewhat perplexing, but not without rationale. Some blacks still hold the racist belief that working with, or even among, white people is somehow a betrayal of their fellow blacks, which is obviously a shortsighted and ridiculous way of thinking. Also, the idea that black people can accomplish anything on a national scale without working with other races is irrational because blacks make up only approximately 12-13% of the U.S. population. Little can be done in a democracy with only 12-13% of the voting power. For example, Barack Obama was not elected President of the United States on the exclusive strength of the black vote.
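The arithmetic behind this point can be sketched with a toy calculation; the 13% figure is the approximate population share cited above, and the assumption that turnout mirrors population share is purely illustrative:

```python
# Toy calculation (illustrative assumptions): how far a unified 13%
# voting bloc falls short of a majority on its own, and what fraction
# of the remaining electorate it would need as allies to reach 50%.
bloc_share = 0.13          # assumed share of the electorate
majority = 0.50            # simple-majority threshold

# Additional votes needed, expressed as a fraction of everyone else.
needed_from_others = (majority - bloc_share) / (1 - bloc_share)

print(f"Bloc alone: {bloc_share:.0%} of the vote")
print(f"Support needed from the other {1 - bloc_share:.0%}: "
      f"{needed_from_others:.1%}")
```

Under these assumptions the bloc would still need roughly 42-43% of all other voters to side with it, which illustrates why coalition-building rather than go-it-alone politics is the only arithmetic that works.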

However, belief in this negative association may be catalyzed by the aforementioned behavior of rich black individuals. The general apathy of rich blacks towards the black community may be seen as a betrayal born from their association with rich non-black individuals. Unfortunately this characterization is neither accurate nor appropriate; any cultural abandonment is the fault of the rich black individual, not the new environment in which he/she chooses to live.

The biggest loss from alienating intelligent black men and women is that such behavior increases the probability of damaging the bond between those individuals and the community. If this bond is damaged it reduces the probability that these individuals will become leaders and fight for the community instead of simply producing personal success and then withdrawing from public advocacy. Recall from above that getting into the trenches is what produces change, not showing up for a trendy protest march once a year.

Losing this bond to the black community in general also reduces the probability that black leaders remain connected to both the small-scale and large-scale problems of their brethren. Significant elements of this lost connection are already visible in modern black leadership: while one can suggest that a leader like Al Sharpton cares about economically poorer blacks, he is not poor himself and thus does not truly understand the daily realities of a poor black person because he does not experience them. A similar analogy can be made to a football sportscaster who has never played football: while intricate study can produce meaningful analytical knowledge of the game, the lack of actual experience leaves holes in that knowledge born from missing its specific nuances. Additionally, one cannot claim understanding based on having been poor in the 70s but not now, for being poor in the 2010s is sufficiently different both physically and psychologically.

Another potential sub-problem encapsulated by both the first and second problems is that under the current version of capitalism practiced in the United States there is significant divergence between the priorities of the rich and the poor, i.e. policies that help the poor limit the amount of money that can be made by the rich. Therefore, a strong social bond will increase the probability that rich blacks sacrifice their maximum wealth potential for the interests of their fellow poorer blacks. Whether or not the current national black leadership is willing to make this sacrifice is unclear.

The third problem in developing leaders is the negative relationship that blacks have with the criminal justice system. It is difficult to cultivate quality leaders when a significant portion of a demographic is incarcerated. Some are quick to use the disproportionate incarceration rate to suggest that the criminal justice system is racist. However, this accusation is not founded on solid logic, as has been previously discussed on this blog. The reality is that both parties have marred the relationship, but in order for it to improve one side must take the first meaningful and honest action.

The first step to resolving this problem is to develop a better relationship, so what limits the ability and motivation of the police to take that first step with the black community? Unfortunately one legitimate rationale is safety. Regardless of how one wants to view the relationship between the black community and the criminal justice system, the fact remains that a disproportionate amount of legitimate crime, especially violent crime, is committed by blacks. Combine this reality with the generally negative view that the black community has of police, and it is reasonable to assume that a reduction in vigilance would increase the probability of bodily harm to police officers.

What limits the ability and motivation of the black community to take the first step in improving the relationship with the police? Unfortunately the answer appears to be simply distrust and pride. The outward impression generally expressed by the black community is that, from cradle to grave, individuals are instructed not to respect the police, not to trust the police, and not to assist the police in their investigations. With this mindset one should not be surprised that the police are inherently wary and quicker to utilize violence, whether justified or not, against black suspects.

What stops black individuals from simply working with and respecting the police? Are they concerned that by respecting the police they somehow make themselves more of a target for “trumped up” or false criminal charges? That rationale makes little sense, for cooperating with and respecting the police is not equivalent to rolling over and abandoning one's rights or expectations of fair treatment. In fact, it stands to reason that a positive relationship, versus an antagonistic one, will improve the probability of not having false charges levied against one.

One thing the black community needs to accept is that some of its members commit crimes. As mentioned earlier, for a number of blacks, and some non-blacks, there seems to be this idea that the criminal justice system is grossly racist and that a significant percentage of the blacks in jail are not guilty of the crimes for which they were convicted, a belief that is not correct. Almost all police departments, and the court systems associated with their jurisdictions, are not racist.

Once this reality is accepted, predominantly black communities can create community action committees to work with police to ensure fair and equitable treatment for black suspects, increasing the probability that those who are guilty receive appropriate sentences and those who are not guilty are not wrongly convicted. These community action committees would also be another environment in which to cultivate black leadership, for members would have to interact with both the black community and law enforcement, acting as a unifying force to produce cooperation as well as appropriate and just action by both parties.

Furthermore, an element that the black population must accept about the criminal justice system is that it still functions on evidence and logic. There are a number of situations where a non-black individual is involved in a potential criminal altercation with a black individual, a number of individuals in the black community expect a conviction, and if one does not occur a racial conspiracy is viewed as the only valid explanation for that failure. What must be realized is that there are times when insufficient evidence exists to produce a conviction, and the rule of law must be followed regardless of what one emotionally wants.

One of the greatest elements that increases animosity between non-blacks and blacks regarding the criminal justice system is that the magnitude of the black community's public response to crimes committed against blacks appears to depend on the race of the assailant. For example, black assailants commit the vast majority of violent crime against blacks. However, these criminal actions do not produce the types of wide-scale public protests by the black community that accompany violent action against blacks by non-black assailants, especially when the incident is a police shooting. Instead the black community tends to suggest that intraracial (black victim, black assailant) violence be dealt with through small, low-publicity events and church-based small group discussions.

Unfortunately this strategy has produced a significant drawback: the lack of publicity attributed to protesting intraracial crime creates the perception that the black community is insincere with regard to its ideology of “black lives matter”. This behavior leads a non-black individual to believe that, for black individuals, black lives only matter when non-blacks do the killing. Now one could protest the accuracy of this belief, but understand that perception is what matters here. At the moment the black community needs to amplify the publicity of its outrage regarding all violent crime against blacks to truly live up to the creed “black lives matter.”

Note that in the media back-and-forth on this issue the term “black-on-black” crime is commonly utilized and commonly attacked. The problem with the black community and its allies rejecting the idea of “black-on-black” crime is that the black community typically initiates the discussion of race as an element of crime when it engages in these large demonstrations protesting the death of a black individual at the hands of a non-black individual. While some could argue that the motivations of these protests originate because the assailant is a police officer, thus assigning the negative reaction to the abandonment by that officer of his/her duty, one must ask: would these protests be of similar size and intensity if an Asian or black officer, rather than a white officer, committed the shooting? While sad, it is hard to believe the answer would be yes. Overall it is logically peculiar to suggest that “black-on-black” crime does not exist, as the term simply categorizes criminal activity in which both the assailant and the victim are black, just as “white-on-white” crime categorizes criminal activity in which both the assailant and the victim are white.

A lesser problem facing the black community when developing leaders is that no major national individual or organization seems interested in doing the legwork to develop those leaders. The philosophy appears to be: hope that quality leaders “magically” evolve on a local level, then these local leaders, driven by their own pride and ambition, move on to the national stage and produce a wealth of new strategies to solve problems in the black community. This belief is almost laughable because without any support from the national leadership it is almost impossible to expect local leaders to become national leaders. There are a few black-centric leadership conferences, like the Whitney M. Young Jr. Urban Leadership Development Conference or those of the National Black MBA Association, but those cater more to individuals already in some form of leadership position.

There seems to be a lack of focus on developing young leaders, like those only in their teens without any professional experience or ties. This strategy is troublesome because it limits the ability to develop leaders by relying on early self-determination to produce the candidate pool, which for all races and ethnicities is not as large as one would hope. If this mindset is correct then the good news is that such a situation is easily correctable. For example, an organization like the NAACP could establish an independently managed non-profit organization to fund annual leadership conferences for high-school students (ages 14-18), including travel vouchers for attendees, in a variety of states. These new conferences could instill confidence in aspiring leaders in addition to giving them a place to refine their skills.

As noted above the loss of solution potential is not the only concern with the leadership development strategy in the black community. The loss of future leaders also hurts the community because there is no one to replace leaders that are unable to produce breakthroughs or even try to produce breakthroughs; this lack of a viable replacement pool is one of the major reasons why leaders from the 60s and 70s are still wielding a large portion of black political power in the 00s and 10s.

One of the most important elements of leadership in the black community is marshalling enough political power to produce positive solutions to existing relevant problems. However, available political power appears in short supply, leading the black community to frequently lament being treated as political pawns. Black individuals generally seem to believe that the Democratic Party neglects their value, whereas on a general policy level voting for the Republican Party is self-sabotage. Despite some past dreams, the idea that blacks could form a viable third party is instantly defeated by the lack of demographic power. Faced with these conditions, how can the black community exert political power?

This question is a difficult one in a democracy for any group that is a significant minority. One way to produce more efficient political power is to frame solutions in the context of how they help society, not how they specifically help the black community. Sadly, the realities of debate may preclude arguing for certain solutions from a black perspective. There may be too many individuals who, upon hearing a solution framed from the black perspective, will simply think, “black people are complaining again” and ignore the remainder of the solution. Therefore, even if the solution is ideal for a vast majority of the population, framing will affect how many individuals not already aligned with addressing the problem will hear the solution and thus be convinced.

Another, potentially risky, strategy is that the black community could take action to demonstrate its value to the Democratic Party by “sitting out” an election cycle. For example, in a non-Presidential election cycle the black community could place the bulk of its voting power behind a third-party candidate that innately supports its solutions. Understandably, in all national elections, along with a number of local and state elections, this candidate will lose; remember the lack of voting power.

However, in a number of races, various Democratic candidates that could have won will also lose without the support of black voters. This method utilizes the idea that sometimes one has to fail in a battle to win a war: the winners of these elections will more than likely not support positive solutions to black problems, but the extent of black voting importance will be demonstrated. The above strategy is risky because it may damage trust between the black community and other members of the Democratic Party, who may view such tactics as unnecessary strong-arming producing significant damage for little reward.

Outside of political power what are other solutions that the black community could apply with strong leadership to solve the above major problems affecting them?


Poverty:

Black poverty is a complex issue influenced by numerous elements. A significant element of poverty can be attributed to racism, but the most influential aspect is not current racism but past racism. Racism in the past significantly hampered the ability of black families to generate intergenerational wealth. Most rich individuals, regardless of race, do not produce most of their wealth within their own lifetime (note that most who did relied largely on significantly over-valued Internet companies or irrelevant consumer products). Instead the wealth was built starting with a great-grandfather who passed down some of that wealth and opportunity to the grandfather, who built on it and passed it down to his children, and so on.

For blacks, even after the passage of various civil rights laws in the 1960s, acquiring a job that paid well enough to produce intergenerational wealth was difficult. In general, less wealth means less opportunity, which in turn reduces the ability to produce wealth; this is why most people who generate large amounts of wealth over a short period of time require non-traditional avenues like a hot, yet more than likely socially unimportant, Internet or consumer product. With less ability to produce intergenerational wealth, the black community has had a more difficult time producing overall wealth outside of entertainment occupations (actor, singer, professional athlete).

As tempting as it may be to some to simply say racism, it is not solely to blame for this situation. Black alienation of intelligence has also played a significant role in the larger poverty rates, especially as mechanization and outsourcing eliminate lower-skill jobs and competition for those jobs increases due to changing ethnic demographics. Black education and growth opportunities have also been negatively affected by the lack of cohesive family structure. However, contrary to the beliefs of some, a new large wellspring of marriages and stable families will not by itself produce significant positive movement in black economic opportunities.

A popular talking point in both political parties has suggested that the unemployment and underemployment problem in the U.S., regardless of race, can be eliminated solely through education. This idea is laughable, for education by itself merely makes one a potentially more attractive applicant; it does not directly create new jobs. The simple fact is that with technology and outsourcing, well-paying jobs ($45,000+) are disappearing, replaced by $20,000-$30,000/year service jobs. The best-paying jobs are difficult to acquire because they are almost reserved for individuals of privilege who have the right connections, specific education, and resources. Noting this reality is not to say that good jobs do not exist, but to combat the simple-minded idea that if only members of the black community get educated and married they will be greeted by a multitude of new jobs.

Overall the black community should be looking first for an economic solution to poverty, not a “race” solution, because the poor are severely disadvantaged in the U.S. regardless of race or ethnicity. One of the best means for blacks to produce positive economic opportunities appears to be fighting for a guaranteed basic income. A guaranteed basic income would mitigate some of the disadvantages associated with the lack of intergenerational wealth and increase the probability of establishing new businesses that could produce new job opportunities, making expanded education financially meaningful. However, when has any black leader even mentioned the value of a guaranteed basic income?

Societal reclusiveness:

This solution is rather easy: the black community simply must accept that it is in its best interest to work with other races and ethnicities. It needs to get over the idea that interacting with other races, especially whites, in a political or economic way is “selling out”. The question is why don't the NAACP, Urban League, and/or National Action Network partner with organizations like the AFL-CIO or the RAND Corporation to improve their chances of producing high-quality solutions and getting those solutions implemented? Do these institutions believe that such an alliance would cost them credibility in the black community? If so, why not be assertive and, through selection of an appropriate alliance, dispel the “sell out” myth?

Relationship with the Criminal Justice System:

Based on the above analysis regarding the risks of engagement, the black community needs to take the first step to mending the rift with the police by accepting the reality that the core of the justice system is not racist; it may have some racist officers, but the system as a whole is not racist. Characterizing the police in general as racist due to a very small percentage of racist officers is akin to, and equally as foolish as, characterizing blacks as nothing but criminals because a small percentage of blacks break the law. Blacks do not like being stereotyped as criminals, so why are so many willing to paint the police with a similar stereotyping brush?

The biggest issue between the two sides appears to be trust; thus, as mentioned above, the formation of small organizations to act as liaisons and work with police should significantly increase the level of communication, and thereby trust, between police and black communities. This interaction would help reassure blacks that their rights are protected by what could be interpreted in their community as a less corruptible “checks and balances” system. The organization could also act as a go-between for black individuals who want to report criminal behavior but may be apprehensive about talking directly with the police. Note that the liaison structure should not be a single person, but an actual organization with multiple individuals who have legitimate relationships with members of the police department.

The lack of positive male and female influence in the black family –

Of all the concerns facing the black community, this appears to be the one that black individuals are most aware of and most want to fix. Certainly one cannot cite a lack of trying to resolve this problem. However, how to ensure positive influence from both sexes remains unclear, otherwise the problem would have already been solved. One obstacle could be economic, in that without jobs black men are not confident that they can be good parents or mentors (i.e. the thought process may be: what kid wants to look up to someone who does not have a job?). Another obstacle may be emotional arrogance, where black men do not feel any inherent responsibility for caring for children. A third obstacle could be the lack of a father figure in their own lives growing up, which limits the ability and/or knowledge these men possess to transmit to children as fathers. Basically, they do not know how to be fathers, thus they do not bother to try.

If these obstacles are genuine then one helpful strategy would be to expand the level of adult mentoring. Instead of simply bypassing adults to mentor children directly, a more effective strategy could involve mentoring adults who lack sufficient confidence or skills to be quality mentors and/or parents. Pride can be a stubborn thing, thus there are times when one simply should ask whether a person would like assistance rather than waiting for that person to ask. Another strategy could be for famous black individuals with stable families to occasionally preach the value of family when they hold autograph sessions or have other interactions with the public, instead of passing that buck to the church.

General negative attitudes black youth have towards education –

One of the chief failures of the current black leadership is the lack of publicity associated with black intelligence. This failure largely entails the lack of emphasis on the positive elements of education and intelligence. Basically, most black youths receive the standard messages: that it is important to do well in school, that being smart is cool, that a mind is a terrible thing to waste, etc. However, the generality of these messages limits their usefulness, especially when intelligence has been passively associated with “selling out” and “not being black enough”. Therefore, black leadership needs to put faces to the message that education is important.

An example of this failure can be seen in the emergence of Neil deGrasse Tyson. Mr. Tyson did not become a popular figure in science due to initial popularity and respect in the black community that was later noticed by non-black communities. Instead Mr. Tyson became popular in non-black communities, most notably white ones, and appears to still lack significant popularity in the black community. Despite individuals like Mr. Tyson being valuable icons of intelligence that later produces significant success, black leadership groups like the NAACP do not appear interested in exemplifying this reality in an effort to dispel the idea that blacks cannot be smart or that smart blacks only sell out.

In addition to the above strategy, leadership needs to focus on invoking pride in black youth to do their best to achieve success in society, in school, in athletics, etc., not just in one single aspect. For example, when do black youths ever see innovative science projects or poetry crafted by other black youths? Also, there needs to be more effort applied to demonstrating the end result of education. Basically, young students, regardless of race, need to know that learning “subject matter x” is valuable in the career they want to pursue. Having this knowledge will increase their level of interest and motivation for education.

In the end, while there are significant problems, both intraracial and interracial, in the black community, an important step to solving them is ensuring that black leadership has credibility, and an important element of establishing credibility is accountability. Without accountability, i.e. the ability to lose their leadership position and influence, leaders can behave however they want without repercussions, and typically that behavior will not end well for those they lead.

In addition to producing new leaders to ensure accountability, black leaders must increase the level of specificity in their solutions, especially on a quantitative level. When proposing a solution, specifics, including all relevant assumptions, are critical because they demonstrate two important features of legitimacy. First, specifics demonstrate that the authors have thought about the issue and the ramifications of developing and executing a particular solution beyond simple, pleasant, focus-group-approved sound bites. Second, specifics demonstrate a willingness to be proven wrong. Whether or not the authors accept errors in their analysis and change accordingly is unknown, but at least by providing specifics the authors produce the means for others to expose errors without the protective cloak of ambiguity.

Overall, in order to solve the problems facing the black community, especially the intraracial ones, black leaders must start to lead by example. Leading by example does not mean participating in protest marches; it means actually producing detailed strategies and ideas on how to address the above problems plaguing the black community, publicizing them with vigor and discussing them in public forums to maximize their potential validity, and being willing to withdraw from leadership positions if they are unable to make sufficient progress. In association with a newly assertive leadership, the black community must be more engaged with its leaders beyond a simple protest level and be willing to move on from a particular leader if he/she cannot produce at least templates for positive solutions.

Wednesday, January 21, 2015

The Current State of Alzheimer’s Disease Treatment

Additional Alzheimer’s disease Blog Post here

Regrettably there remains no effective treatment for Alzheimer’s disease (AD). Current therapies target cholinergic (acetylcholinesterase inhibitors) and glutamatergic (NMDA receptor antagonists) neuronal activity in an attempt to improve symptoms largely associated with cognitive decline.1-3 Unfortunately these treatments are limited in their effectiveness because they address not the cause of the disease, but rather the symptoms. There are other non-pharmacological treatments that address the detriments of cognitive decline, like social measures through various support groups and more personalized individual care. However, while these interventions do what they can to help manage AD, without the development of a viable disease-modifying therapy the natural expansion of AD cases, due to an increasing elderly population, will significantly increase global healthcare costs, especially in high-cost countries like the United States. In addition to increasing healthcare costs across the board, these cases will also significantly reduce the quality of life for millions.

Based on the prominence of the amyloid beta (Abeta) cascade theory regarding the development of AD, one of the principal recent strategies for creating a future treatment has been utilizing an Abeta antibody that will either prevent plaque formation or break plaques apart, hopefully producing positive cognitive remediation for those suffering from AD. Unfortunately, while this theory appears reasonable, positive empirical evidence supporting this strategy has proven lacking. In fact, numerous Phase II and Phase III studies have failed to demonstrate significant positive cognitive outcomes for these types of drugs versus placebo controls.1,4,5 The two most notable recent failures have been Bapineuzumab and Solanezumab; both were able to reduce fibrillar amyloid concentrations, but neither demonstrated a significant benefit to cognitive processes.5,6

These results should not be surprising because these drugs represent an older way of thinking about Alzheimer’s disease in which plaques are the principal deleterious agent and their elimination is essential for recovery. Unfortunately there is ample evidence that soluble Abeta oligomers, and not their fibrillar aggregates, are the actual deleterious agents responsible for a significant portion of the symptomology of AD. If this different pathway is correct then the elimination of Abeta plaques should provide little benefit, as seen in the multiple Phase II and III trial failures, and could even be considered negative depending on how those plaques are broken apart (possibly increasing the available concentration of Abeta oligomers).

Now it is believed that Solanezumab can bind to soluble Abeta, which could explain why it performed better than Bapineuzumab, which binds to aggregate/fibrillar Abeta.6 However, the binding activity of Solanezumab was still insufficient to produce a meaningful benefit. Solanezumab supporters believe that if applied early, before symptoms appear, it may be able to produce a meaningful benefit; however, this belief may be misplaced. If this treatment has to begin that early to produce valid benefit then it will not help many individuals overall, assuming it ever works, for at the moment its “potential” is still theoretical.

There is an additional concern that simply changing the strategy from a fibrillar antibody to a soluble one may cause as many problems as it solves. Despite its fame as the chief element responsible for initiating AD, Abeta has innate roles in the brain that could cause problems if natural concentrations were significantly reduced, as would occur in a preventative vaccination/treatment strategy. The two major natural roles for Abeta in the brain appear to be those of an indirect neuronal inhibitory agent and an anti-microbial agent.7-10 This anti-microbial activity may be why producing success from a direct antibody therapy is difficult, for numerous previous attempts at active immunization against Abeta have resulted in cases of aseptic meningoencephalitis.11

Regarding the issue of Abeta as an anti-microbial agent, there is a wealth of circumstantial evidence that seems to support such a conclusion and a small amount of direct evidence demonstrating anti-microbial behavior against certain specific targets.7-9 For example, one typical piece of evidence is that AD temporal lobe homogenates contain about 25% more activity against C. albicans on average than non-AD samples.7 However, while Abeta is thought to react against C. albicans, it is also suggested that microglia are more active in AD patients than in non-AD patients; thus this increased activity may be derived from the microglia instead of the Abeta. On a side note, this anti-microbial behavior has led some to conclude that AD can be induced by a pathogenic response. Overall there could be two different methodologies behind how bacteria and other pathogenic agents could induce AD.

The first method involves observations that several bacteria contain amyloidogenic proteins. For example the periplasmic outer membrane lipoprotein of E. coli demonstrates an amino acid sequence similar to Abeta peptides and forms structures visually similar to amyloid.12,13 In addition there appears to be pathological similarity between herpes simplex encephalitis (HSE) and AD.13-15 With this information one could come to the conclusion that either certain pathogens have the ability to mimic Abeta and facilitate extracellular receptor interactions similar to those of extracellular Abeta, or the pathogens produce their own Abeta. Either scenario would enhance the probability of an individual developing AD in the presence of a specific infection.

The second method involves the overexpression and release of Abeta in response to an infection. If Abeta does in fact have an anti-microbial effect then it would make logical sense from a biological standpoint that the body would release Abeta in response to an infectious agent, especially in the brain where immune response is limited. Cumulative infections or a single long-duration infection could result in an increased probability of developing AD due to the excess production of Abeta to fight off the infection.

Not surprisingly there are some significant concerns about claims that infections play a significant role in the development and progression of AD. Simply from a general understanding it appears that too often researchers want to tie a particular pathogen to the cause of a given disease; “this bacterium/virus causes this particular type of cancer” is one of the most popular. The problem is that most of the time there is no direct evidence to support such a conclusion beyond the fact that the pathogen is present in individuals who have the condition. Another piece of reasoning that proponents of the infection hypothesis like to report is along the lines of “well, virus/bacteria x has a similar symptomology and/or pathology in the brain,” as was previously mentioned above. The problem with that claim is that there are hundreds of diseases that are very similar to each other in many symptomatic respects, yet have small but very meaningful differences in how they originate.

Whether or not infection plays a meaningful role in AD is still under debate, but it is more unlikely than likely because early-developing cases rarely demonstrate any large bacterial/viral concentrations, and a significant number of late-developing cases also lack any abnormal bacterial/viral concentrations relative to the age of the patient. The lack of large bacterial concentrations in the early stages of AD progression, and especially in late-developing cases, leads to one of two conclusions regarding any increase in bacterial concentration in AD patients. First, this increase occurs as a later development due to what could almost be viewed as a weakened immune system because microglia are busy trying to clear the excess Abeta. Second, the increase has little to do with AD and can be considered a coincidental occurrence. Neither of these possibilities involves bacteria as a causal agent.

Overall there is sufficient direct and indirect evidence to support the idea that Abeta has anti-microbial properties. An example of indirect evidence is the increased susceptibility to infection possessed by beta-secretase or gamma-secretase knockout mice.16 Also Abeta, in vitro, is clinically active against at least eight common microorganisms, activity similar to that of pleiotropic LL-37, a common antimicrobial peptide.7

However, despite this biological role, there is currently no published evidence that demonstrates an increase in Abeta synthesis and secretion that later leads to the development of AD. Therefore, it is difficult to conclude that infections genuinely facilitate AD through additional Abeta synthesis. This highlights an interesting aspect of Abeta in that eliminating it or dramatically reducing it increases infection susceptibility, yet infections seem to be unable to produce sufficient concentration changes in AD development.

Regarding the role of Abeta as an “inhibitory” agent in neuronal processes, one of the most telling pieces of evidence is the fact that benzodiazepines, an inhibitory agent, reduce secretion of Abeta peptides from hippocampal slice neurons.10,17 Also overexpression of amyloid peptide precursor (APP) significantly reduces excitatory activity.10 Other evidence suggests a similar “inhibitory” role through excitatory depression, but under normal physiological conditions this excitatory depression possesses a level of minute control resulting in enough of an impact to quell hyper-excitation, but not enough to cause short-term or long-term damage.

There is some research that suggests a connection between epilepsy and AD. A number of studies claim that increasing Abeta42 concentration in mice increases the probability of progressive epilepsy.18,19 One interesting question is whether the influence of Abeta on neuronal excitability changes as it moves from monomeric to oligomeric to proto-fibrillar/fibrillar states. While Abeta as a monomer/oligomer construct appears inhibitory, the ability of fibrillar Abeta to interfere with membrane fluidity allows it to influence neurons in an excitatory manner.

Another piece of information that supports a role for Abeta in neuronal firing is that calcium imaging shows hyperactive neurons clustering around amyloid plaques in the cortex of APP/PS1 mice.20 There are two common explanations for this result. First, the neurons begin to hyperexcite, which prompts the release of Abeta peptides (40, 42, 43, etc.) in an attempt to inhibit further excitation; during the process of inhibition these peptides begin to coalesce into proto-fibrillars and plaques, which then somehow rejuvenate hyper-excitation (most likely by changing membrane fluidity and resting potential), creating a small but progressively positive feedback loop. Second, the neurons begin to release Abeta peptides in larger than normal concentrations, perhaps due to infection or genetic mutation; these Abeta peptides then form proto-fibrillars and plaques that facilitate a newfound hyper-excitation.

One method to determine which of the above explanations is more logical is to distinguish between the influence of the proto-fibrillar Abeta and monomer/oligomer Abeta. Basically ask the question: do proto-fibrillars really induce additional excitatory behavior or are they simply blocking the ability of the smaller Abeta species (monomers, dimers and oligomers) to reduce excitatory activity?

For example it makes sense that an Abeta oligomer binding to an extracellular receptor induces a depressed neuronal response, with the receptor eventually discarding the bound Abeta, but that unbound Abeta could still be available to bind another receptor if necessary until it is cleared. If proto-fibrillars act as a form of blockade, they would be expected to limit the binding ability of oligomer Abeta, reducing its ability to lessen excitatory behavior. However, the reduction of this damping ability does not explain why non-epileptic individuals would develop epilepsy; there must be an additional excitatory agent. Thus it stands to reason that the first answer is more favorable in that proto-fibrillar Abeta induces excitatory activity.

However, there are questions regarding the timing of epilepsy development because hippocampal neurons become hyperactive early in transgenic mice whereas hyperactivity in the cortex is temporally linked to plaque formation.20 One possibility to explain this apparent contradiction is that soluble Abeta has a higher probability of inhibiting inhibitory elements in the hippocampal space versus the cortex due to neuronal architecture. Regardless of this hyperexcitation issue, epilepsy in AD patients typically increases in probability with disease progression, which corresponds to a greater development and concentration of proto-fibrillar Abeta, especially relative to oligomeric Abeta. Therefore, it stands to reason that under normal conditions and in early-to-moderate AD, Abeta acts chiefly as an inhibitory agent (regardless of what type of neurons it is inhibiting), but due to this differing activity Abeta slowly becomes more excitatory as the AD advances.

This method of inhibitory action extends not only to competitive binding between glutamate and Abeta on NMDA and AMPA receptors, but also to the induced endocytosis of AMPA receptors, reducing excitatory binding probability.10,21-23 The endocytosis of AMPA receptors may explain why hyperexcitation in neurons with APP over-expression mutations is not immediately reversible, but over time becomes reversible with cessation of immediate neuronal firing.10 The lack of Abeta also induces GABAergic neuron sprouting, which may be driven to compensate for the hyperexcitation.24

In the end any future attempts to develop an antibody-based therapy for Abeta will have to determine how the presence of the antibody will influence the two natural Abeta processes. While there has been some initial and isolated success in studies that have demonstrated some protective benefits for auto-antibodies of an Abeta oligomer subset,25,26 it is difficult to “hand wave” away the negative results associated with previous antibody tests that resulted in cases of aseptic meningoencephalitis.

Recall that the production of Abeta begins with APP, a transmembrane protein that has three principal isoforms, 695, 751 and 770, each containing the 4 kDa Abeta peptide; APP is synthesized in the rough endoplasmic reticulum and glycosylated in the Golgi apparatus.10 Three types of secretase enzymes interact with APP. Endopeptidase alpha-secretase cleaves within the Abeta region, eliminating any opportunity to form an Abeta peptide. If APP is not cleaved by alpha-secretase then APP can be incorporated into endocytic compartments for cleavage by beta-secretase and/or gamma-secretase.

Beta-secretase cleaves APP at the N terminus of the Abeta peptide sequence and gamma-secretase cleaves at the C terminus. When beta-secretase cleaves APP it generates a secreted ectodomain beta-APP and a 10-kD COOH terminal fragment (beta-CTF).27,28 This beta-CTF fragment is the substrate for gamma-secretase, which cleaves the transmembrane domain of APP producing an Abeta fragment.28 Gamma-secretase can cleave at multiple sites, creating multiple lengths of Abeta peptide (typically 40, 42 and 43).27 However, if gamma-secretase cleaves APP before beta-secretase, the end product cannot be converted to Abeta. Therefore increasing alpha-secretase concentration/activity will decrease Abeta concentration, increasing beta-secretase concentration/activity will increase Abeta concentration, and increasing gamma-secretase may or may not (depending on other factors and simple luck) increase Abeta concentration.
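As a purely illustrative toy model (the enzyme names come from the text; the function and its labels are my own invention), the cleavage-order logic above can be sketched as:

```python
# Toy sketch of the cleavage-order rules described above (illustrative only).
def process_app(cleavage_order):
    """Return the Abeta-relevant outcome for a sequence of secretase cleavages.

    cleavage_order: secretases in the order they act, e.g. ["beta", "gamma"].
    """
    beta_cleaved = False
    for enzyme in cleavage_order:
        if enzyme == "alpha":
            # alpha-secretase cuts inside the Abeta region itself,
            # so no Abeta peptide can be formed afterwards
            return "no Abeta (alpha cut inside the Abeta region)"
        if enzyme == "beta":
            # beta-secretase produces beta-CTF, the gamma-secretase substrate
            beta_cleaved = True
        if enzyme == "gamma":
            if beta_cleaved:
                return "Abeta produced (length 40, 42 or 43)"
            # gamma before beta: the product cannot later become Abeta
            return "no Abeta (gamma cleaved before beta)"
    return "beta-CTF only (awaiting gamma)" if beta_cleaved else "APP uncleaved"

print(process_app(["alpha"]))          # no Abeta (alpha cut inside the Abeta region)
print(process_app(["beta", "gamma"]))  # Abeta produced (length 40, 42 or 43)
print(process_app(["gamma", "beta"]))  # no Abeta (gamma cleaved before beta)
```

The three calls mirror the three outcomes the paragraph describes: alpha-first eliminates Abeta, beta-then-gamma produces it, and gamma-before-beta forecloses it.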

While the functionality of the various secretases is straightforward, the development of a viable inhibitor is challenging for it faces three separate problems. First, secretases, especially beta, have multiple substrates and large substrate binding domains, so competitive inhibitors are typically short-lived and non-competitive inhibitors have difficulty achieving full inhibition. Second, inhibitor candidates must be able to cross the blood-brain barrier. Third, secretases are also involved in other important biological processes, thus inhibition must be conducted carefully otherwise numerous unfavorable side effects will accrue making long-term treatment difficult. For example beta-secretase is thought to be necessary for proper function of muscle spindles due to its interaction with Neuregulin-1, thus long-term beta-secretase inhibition may already be a non-starter.29

Another problem with secretase interaction is the relationship between gamma-secretase and intramembrane cleavage of Notch receptors, most notably Notch1.30 The Notch pathway in general is important for neuronal function, cell communication and cell homeostasis in developed brains. Therefore, due to this relationship, while gamma-secretase inhibitors typically reduce Abeta concentrations in plasma and cerebral spinal fluid (CSF), they also produce side effects like haematological and gastrointestinal toxicity, skin rashes and changes in skin color.30 The failure of a Phase III trial of Semagacestat (a highly touted gamma-secretase inhibitor), after a successful Phase II trial, was due to the above-mentioned side effects, including the development of non-melanoma skin cancer as well as a dose-related worsening of cognitive measures.31

The above problems make it difficult to believe that a secretase inhibitor will be a long-term answer for AD treatment. There was a growing trend towards redirecting attention away from an inhibitor agent and towards a modulator agent. A modulator can shift the APP cleavage site maintaining the relationship with the Notch receptor, thus creating a possibility to reduce Abeta concentration while reducing side effects. Unfortunately one of the first modulators, Tarenflurbil, failed to produce any positive results regarding AD treatment;32 thus modulators may not be a strong choice for future AD therapies.

Also, research has been invested in a naturally occurring monosaccharide, NIC5-15, that can function as a gamma-secretase inhibitor that somehow avoids interfering with the Notch relationship while also increasing tissue sensitivity to insulin, reducing insulin concentration.33,34 However, there are lingering concerns about how this inhibitor will influence the natural roles of Abeta in the body and about its lack of any meaningful studies beyond a very simple Phase II trial.

One of the more interesting and potentially important elements in the progression of AD is the location of beta-secretase and gamma-secretase relative to each other and to APP. A point of interest is what role lipid rafts play in dictating how APP is processed. Lipid rafts are lateral assemblies of cholesterol and sphingolipids that form ordered platforms moving through the matrix of a cellular membrane and can compartmentalize various membrane processes. Due to this compartmentalization they can produce microdomains that provide an efficient environment for molecule assembly and membrane protein trafficking as well as influencing membrane fluidity.

The involvement of lipid rafts in Abeta processing is supported by multiple lines of evidence. First there is reason to suspect that beta-secretase needs to be associated with lipid rafts to even be active, let alone interact with its APP substrate.28 Whether or not gamma-secretase is also inactive when outside of a lipid raft is unclear, but it appears that significant activity takes place on lipid rafts.35-37 Bolstering support for the involvement of lipid rafts is the identification of various AD-related proteins in lipid rafts from both human and mouse brains. Currently Abeta40, Abeta42, presenilin 1, beta-secretase, APP, beta-CTF and alpha-CTF have all been isolated from lipid rafts.28,38,39

In addition, beta- and gamma-secretase cleavage seems to depend on endocytosis of APP. The requirement of endocytosis suggests that beta-secretase interaction does not occur at the cell surface. This requirement may be because surface APP and beta-secretase are either floating freely in the cellular matrix or on separate lipid rafts.28 Therefore, it seems to make more sense that the interaction between beta-secretase and APP occurs after endocytosis, during the amalgamation of various lipid rafts within endosomes. Interestingly, the appearance of larger endosomes is a typical precursor to AD progression, which could support this idea; i.e., the endosomes grow larger to accommodate the coalescence of the lipid rafts due to greater cholesterol and/or Abeta levels.

There is also evidence that significantly increased concentrations of Abeta begin to appear in lipid rafts before symptoms even begin, but these studies did not compare concentrations of Abeta in the lipid rafts to concentrations of Abeta in the intracellular or extracellular matrix, thus this result cannot be used as conclusive evidence that Abeta synthesis originates on or is dependent on lipid rafts.40 Another important distinction is that some estimates place slightly over 20% of brain Abeta on lipid rafts,40 while lipid rafts constitute only 0.4 to 0.8% of a given plasma membrane.41 While the estimate of lipid raft compartment space is for only one particular cell type, there is little reason to believe that the amount of lipid rafts varies significantly between different cell types.
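A quick back-of-the-envelope calculation makes the point of that comparison concrete (assuming the two cited figures can be combined directly):

```python
# Enrichment of Abeta on lipid rafts, using the figures cited above.
abeta_on_rafts = 0.20                            # ~20% of brain Abeta on rafts
raft_share_low, raft_share_high = 0.004, 0.008   # rafts: 0.4-0.8% of membrane

# If rafts hold 20% of Abeta but only 0.4-0.8% of membrane area,
# Abeta is roughly 25- to 50-fold enriched on rafts.
enrichment_max = abeta_on_rafts / raft_share_low
enrichment_min = abeta_on_rafts / raft_share_high
print(f"Abeta enrichment on rafts: ~{enrichment_min:.0f}x to ~{enrichment_max:.0f}x")
```

In other words, even though rafts are a tiny fraction of the membrane, Abeta concentrates on them at roughly 25 to 50 times the density expected by raft area alone, which is what makes the raft hypothesis attractive.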

For the moment assume the following regarding Abeta synthesis and lipid rafts:

- The overall size of a lipid raft is largely dictated by the total amount of elements available to form it;
- beta-secretase is only active on a lipid raft and produces beta-CTF;
- gamma-secretase is in close proximity to lipid rafts; whether it is only active on a lipid raft is unknown;
- alpha-secretase is not localized on lipid rafts.

Based on the above information it makes sense that removing cholesterol from plasma membranes would significantly increase membrane fluidity (by shrinking the total number and size of lipid rafts). Increasing membrane fluidity would increase lateral movement of APP and alpha-secretase in the plasma membrane, possibly increasing alpha-secretase activity. Reducing the availability of lipid rafts should also limit beta-secretase activity, making alpha-secretase interaction more likely, which has been supported experimentally.42

There is an interesting side point here: recall that fibrillar Abeta in the extracellular matrix is thought to change membrane fluidity, with a higher probability of an increase in fluidity than a decrease. If this change in fluidity actually occurs then it could act as a negative feedback mechanism. After enough Abeta is secreted into the extracellular matrix to form fibrillar elements, like plaques, an increase in membrane fluidity should occur that would negatively influence lipid rafts, reducing the probability of further Abeta synthesis until the fibrils are cleared from the extracellular matrix. If this is the case then plaque-busting drugs could worsen AD in multiple ways, not only by breaking down fibrils and plaques into more toxic oligomers, but also by eliminating a negative feedback mechanism that could limit Abeta synthesis.
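This proposed feedback loop can be caricatured with a toy simulation (all rate constants below are invented for illustration; only the qualitative behavior, fibril buildup throttling further synthesis, reflects the text):

```python
# Toy negative-feedback sketch of the fluidity hypothesis (illustrative only).
def simulate(steps=200, base_synthesis=1.0, aggregation=0.05, clearance=0.02):
    """Iterate a crude soluble-Abeta / fibril model and return the fibril trace."""
    soluble, fibril = 0.0, 0.0
    trace = []
    for _ in range(steps):
        # fibrils raise membrane fluidity, shrinking rafts and thus synthesis
        synthesis = base_synthesis / (1.0 + fibril)
        soluble += synthesis - aggregation * soluble
        fibril += aggregation * soluble - clearance * fibril
        trace.append(fibril)
    return trace

trace = simulate()
# with the feedback term, fibril load levels off instead of growing without bound
print(f"final fibril load: {trace[-1]:.2f}")
print(f"late-stage growth per step: {trace[-1] - trace[-2]:.4f}")
```

Removing the `1.0 / (1.0 + fibril)` term (i.e., breaking the feedback, as a plaque-busting drug might) lets the fibril pool settle at a much higher level, which is the qualitative worry raised above.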

There appear to be two different avenues by which APP can associate with a lipid raft: 1) during transit between emergence from the Golgi body and becoming a transmembrane protein, newly synthesized APP could interact with lipid rafts; 2) during the endocytosis process where APP is re-internalized through clathrin-coated pits.43,44 Without direct evidence it appears more reasonable to assume endocytotic recycling is the dominant lipid raft interaction process simply because it is the more frequent of the two.

Further support for the importance of lipid rafts involves the behavior of their building blocks. Clearly numerous studies have demonstrated that increasing cholesterol increases both lipid raft formation and the probability of developing AD, but cholesterol is not the only element that makes up lipid rafts. What happens if sphingomyelin levels are altered? The initial assumption would be that increasing sphingomyelin levels would lead to a corresponding increase in lipid rafts and Abeta synthesis. However, this does not appear to be the case in at least one study. When sphingomyelinase (SMase) is down-regulated or sphingomyelin-synthase activity is up-regulated, both actions increasing available sphingomyelin, intracellular and extracellular Abeta levels decrease.45 The same result was acquired when foregoing enzyme manipulation and directly increasing sphingomyelin levels.

There are two immediate possibilities that could explain this result. First, the ratio between the total levels of cholesterol and sphingomyelin may influence the structure of the lipid raft, where higher sphingomyelin levels create a raft formation that reduces beta- and/or gamma-secretase activity. Second, maybe cholesterol is not directly responsible for changing Abeta concentrations, but instead an associated molecule that frequently increases and decreases in concert with cholesterol levels is actually influencing Abeta concentrations. While this second possibility cannot be ruled out, it must also address how different variations of ApoE dramatically change the probability of developing AD, which makes it unlikely.

Increasing sphingomyelin also appears to increase concentrations of C99, the byproduct of beta-secretase processing.39,45 Therefore, it can be reasoned that these changes in both C99 and Abeta concentrations are the result of a reduction in gamma-secretase activity, most likely from a reduced ability to interact with C99 due to lipid raft proximity issues rather than reduced gamma-secretase synthesis or increased inhibition.

Unfortunately there could be another positive feedback effect relative to Abeta42, as Abeta42 directly increases SMase activity and reduces sphingomyelin-synthase activity.45 Such a result is interesting because it raises the question of whether SMase is also located on lipid rafts. There is no reason to immediately assume that this inhibition/activation will lead to a significant increase in Abeta concentration, because if the destruction/creation dynamic is altered too much in favor of destruction it will lead to the breakdown of lipid rafts, halting Abeta synthesis. There could be a problem in that any loss of sphingomyelin in the rafts may be accommodated by an increase in cholesterol deposition. If the lipid raft ratio between cholesterol and sphingomyelin does in fact matter with respect to gamma-secretase activity and overall Abeta production, then such an outcome could indeed worsen the progression of AD.

Another important element involving lipid rafts is that Abeta appears to inhibit sphingosine kinase-1, an enzyme that is chiefly responsible for balancing ceramide and sphingosine 1-phosphate (S1P).46 Ceramide significantly influences many stress signals, which can result in ceasing cellular growth or even cell death, whereas S1P neutralizes the effects of ceramide.46 Thus Abeta, separate from other AD-related mechanisms, can induce cellular death by increasing ceramide concentrations and decreasing S1P concentrations.

There may also be a relationship between SMase and ceramide, which could also act as a positive feedback mechanism relative to the toxicity of Abeta.46 Finally, IGF-1 is able to stimulate sphingosine kinase-1 activity, neutralizing ceramide; this may be how IGF-1 provides its neuroprotective effect relative to AD. Overall a strategy that focuses on influencing lipid raft configuration or its associated elements may be an interesting means to help neutralize AD because it avoids direct inhibition of the secretases and could allow finer control of Abeta concentration.

As mentioned above, although there are still numerous concerns with implementing an Abeta antibody treatment strategy, another potential antibody strategy that has gained some favor in recent years is treatment with intravenous immune (or immuno) globulin (IVIG). In 2002 it was determined that the IVIG product Octagam tended to contain antibodies against Abeta, which fostered the idea that IVIG could be used to treat AD.47 Early evidence from a small Phase II IVIG efficacy study showed improved cognition in mild to moderate AD patients and reduced Abeta in CSF.48,49

Unfortunately this initial promise was marred by the failure of IVIG to demonstrate any improvement in cognitive scores in a larger Phase III study and an additional Phase II study.50,51 The failure to reproduce the positive results of the Phase II study in the Phase III study limits the hope that IVIG could be a useful treatment in the future. Some argue that one bright spot is that IVIG did improve cognitive ability in ApoE4 carriers, those commonly regarded as genetically inclined to develop AD versus more spontaneous development.52 Note that from a safety standpoint both studies did support a positive safety profile for IVIG in AD patients.

Unfortunately even if the Phase III results were positive, one of the major obstacles to producing an effective IVIG-based treatment is the lack of uniformity in antibody content across different samples. According to FDA regulations IVIG samples must be prepared from the plasma contents of at least 1,000 individuals with all IgG subgroups (1-4) present, purified by removing all other blood elements, and must be utilized or properly stored within 21 days of creation. This process produces a non-uniform IVIG product that may have differing concentrations and types of antibodies versus another sample produced in a different laboratory. One sample may contain antibodies to Abeta and tau whereas a second sample may only contain antibodies for tau.53

Another drawback is that it takes approximately 9 months to produce an IVIG sample, thus mass production on a typical pharmaceutical scale is not possible, creating a dearth in supply potential. Some have suggested alleviating this problem through new manufacturing processes or use of recombinant strategies, but neither of these suggestions has been incorporated into a large-scale production line, thus limiting their predictive power.54 Also, unless more people donate blood in general, increasing production with a natural product base will be very difficult.

This supply crunch creates various problems both ethically and economically. Not surprisingly IVIG treatment is expensive, costing about $75 per gram, or $7,500 - $15,000 per treatment for the average patient; with a fixed price for reimbursement from Medicare it stands to reason that a number of low-income individuals could be priced out of long-term IVIG therapy55 (new infusions would be expected every two to five weeks), which is standard protocol and would be applicable for AD. IVIG is also used to treat acute infections and some other conditions, most notably immune deficiencies and autoimmune diseases. With the limited supply availability, transferring IVIG samples to AD patients would hurt these other patients; that is of course if IVIG were an effective treatment for AD, which has not been sufficiently or appropriately demonstrated.
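The arithmetic behind those figures is worth making explicit (the $75-per-gram price and the $7,500 - $15,000 range come from the text; the annualization assumes the stated two-to-five-week infusion interval):

```python
# Implied IVIG quantities and annual cost, from the figures in the text.
price_per_gram = 75  # USD per gram, as stated above

for cost_per_treatment in (7_500, 15_000):
    grams = cost_per_treatment / price_per_gram
    print(f"${cost_per_treatment:,} per treatment implies ~{grams:.0f} g of IVIG")

# Annual burden at one infusion every 2-5 weeks
infusions_min, infusions_max = 52 / 5, 52 / 2
annual_min = 7_500 * infusions_min    # cheapest treatment, fewest infusions
annual_max = 15_000 * infusions_max   # priciest treatment, most infusions
print(f"annual cost: ${annual_min:,.0f} to ${annual_max:,.0f}")
```

So even at the low end, a year of maintenance therapy runs to tens of thousands of dollars per patient, which underscores why a fixed Medicare reimbursement could price out low-income patients.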

Finally, IVIG treatment is not without its own side effects, most notably an increased probability of thromboemboli due to increased serum viscosity reducing blood flow, especially in individuals with existing vascular difficulties and/or abnormalities.56,57 Another concern is the increased probability of a decrease in white blood cell, red blood cell and platelet concentrations, including increased platelet aggregation, which could also exacerbate thromboemboli potential.58-60 A decrease in hematocrit levels is thought to occur through high-molecular-weight IgG complexes binding red blood cells, increasing sequestration.61 Despite these drawbacks, one positive produced from the IVIG clinical studies is the idea that a multi-antibody therapy should be superior to a monoclonal antibody therapy. However, one of the new problems with such a strategy is identifying which antibodies, outside of Abeta, should be included in such a therapy.

If IVIG is not a direct treatment option, an indirect treatment option could be gleaned from how IVIG affects the inflammatory response, where IVIG inhibits complement activation, modulates chemokine expression and regulatory T cell subsets, and negatively influences inflammatory cytokines.52,62,63 One of the more notable results is a decrease in the ratios of IL-5 and IL-12 relative to IL-10 (i.e. either IL-5/IL-12 concentrations decreased or IL-10 concentrations increased), which is thought to decrease the rate of atopy.64,65

Another important element influencing the inflammation response of IVIG is the role of IgG Fc fragments, which involve a glycan component with a terminal sialic acid.66,67 IgG fragments with sialic acid bind to the human receptor dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN) or its murine orthologue (SIGN-R1).68 These binding targets are thought to stimulate immunosuppressive action, reducing inflammation, which could reduce AD severity. However, this anti-inflammatory behavior requires a high dose, if IVIG is the source provider, because only a very small percentage of IVIG contains elements with sialic acid. This high dose may also be prohibitive for treatment due to side effects.69

Another theory behind the effectiveness of IVIG is how it interacts with a more exotic version of Abeta. For example some evidence has demonstrated a significant decrease in soluble Abeta56 oligomer concentration after IVIG treatment. Similar to Abeta42, Abeta56 is another abnormal Abeta isoform that is thought to increase the probability of AD development and influences cognitive impairment in a concentration-dependent manner.68 One reason for this influence is that Abeta56 increases the expression rate of tau, and its effect is negatively influenced by the drebrin and fyn kinase availability in IVIG treatments.65

A more controversial issue with IVIG treatment is whether or not any positive influence is drawn from its facilitated decrease in the CD4/CD8 ratio. Various research has produced results where AD patients have increased,71-72 unchanged73 or decreased74 CD4/CD8 ratios. Overall it is difficult to conclude, either through IVIG treatment or in general, whether influencing CD4/CD8 ratios is an intelligent strategy for treating AD. In addition to CD4/CD8 ratios, IVIG also appears to decrease the concentration of YKL-40, but outside of being a marker for advanced AD there is little belief that manipulating its concentration could prove useful as a therapeutic.75

With the failures of Abeta antibody therapies and the difficulties associated with IVIG standardization and production, some researchers have turned their attention to attacking tau as a treatment methodology. One of the major reasons tau looks promising is that its pathology appears to correlate better with dementia severity than Abeta. Based on some research tau supporters argue that tau is actually responsible for Abeta toxicity.4 However, there is a significant problem with this enthusiasm, namely that while misfolded and hyperphosphorylated tau does lead to generic dementia, it fails to develop into AD without the influence of Abeta.76,77 Also the argument that Abeta toxicity is dependent on tau only appears applicable to fibrillar Abeta, not soluble Abeta, which is of little consequence because fibrillar Abeta has low direct toxicity overall.78 These issues have created conflict between Abeta and tau proponents regarding which element is worth neutralizing.

Regardless of its lack of AD initiation, tau could be an important theoretical therapeutic target. A quick reminder that tau is a microtubule-associated protein (MAP), which is important for the proper stability and functioning of microtubules. The general understanding behind tau toxicity follows a similar pattern to that of AD. Hyperphosphorylation of tau negatively influences its affinity for microtubules, increasing microtubule structural degradation, and increases the probability that monomeric tau forms oligomers, paired helical filaments (PHFs) and neurofibrillary tangles (NFTs). The breakdown of microtubules reduces axonal transport, leading to synaptic starvation and retrograde degeneration.

However, while originally most believed that the neurotoxicity of tau was born from NFTs, more and more recent evidence supports tau oligomers being responsible for a majority of the damage.79-81 Some may suggest that this more severe oligomer toxicity does not correlate with the increased NFT load and distribution markers for AD progression. This result is not contradictory, because more tau oligomers means more damage, but also increases the probability of more NFTs.

One of the interesting elements of tau is the idea of a positive feedback mechanism that makes it self-propagating, similar to a prion.82,83 If such a mechanism is correct, then AD treatment would require one of two strategies: 1) treat AD before this self-propagating positive feedback mechanism is activated by Abeta; 2) neutralize the tau mechanism to a point that disallows the occurrence of this self-propagation. Otherwise, if tau is not addressed, the treatment may neutralize AD, but the continued expression of toxic tau could lead to another form of dementia.

There are two major schools of thought with regards to neutralizing the effects of tau: 1) influence tau phosphorylation; 2) influence tau aggregation. The first option typically involves agents that inhibit phosphorylation of tau, whereas the second typically involves agents that either prevent tau aggregation or enhance aggregate disassembly.

Not surprisingly, the first option has been explored more thoroughly than the second, owing to the idea that hyperphosphorylation stems from an abnormal ratio of activation between glycogen synthase kinase 3 (GSK3), which is responsible for phosphorylating tau, and phosphatase PP2A, which is responsible for removing phosphates from tau.84 There was some early promise for influencing tau through the inhibition of GSK3 by way of either lithium or valproate, two treatments that are commonly used in psychiatric disorders and have relatively stable safety histories and protocols. In addition, both are thought to exert neuroprotective effects by upregulating the anti-apoptotic factor BCL2.85

Unfortunately, despite these positive effects, small studies involving lithium treatment in patients with mild Alzheimer’s disease demonstrated no change in CSF biomarkers and no cognitive benefit.86 Granted, explanations for this result could include the study’s short time frame (6 weeks) and the mild stage of the disease, which could limit the overall effectiveness of a tau-based therapy because the detrimental effects of hyperphosphorylation have yet to fully occur. Studies with valproate have generated similar results, with no positive effects on cognitive or functional status.85,87

Another natural compound that has been targeted as a potential tau therapy is nicotinamide, the biologically active form of niacin (vitamin B3) and a precursor of the coenzyme NAD+. Studies in mice have demonstrated that orally administered nicotinamide limits cognitive deficits and reduces concentrations of phosphorylated tau.30,88 There is also some evidence that nicotinamide upregulates acetylated alpha-tubulin, protein p25 and MAP2c, all of which are thought to increase microtubule stabilization, thus increasing the probability of neuron survival.88

There is limited understanding of the biological effects of nicotinamide in AD; however, nicotinamide appears to exert two effects relevant to tau and microtubule stabilization. First, it upregulates p25 and downregulates p35, which is thought to increase microtubule stabilization.88 Second, it inhibits SIRT2, which functions as an alpha-tubulin deacetylase.88 While the exact mechanism is still unclear, increasing acetylated alpha-tubulin levels, along with alpha-synuclein activity, increases microtubule stabilization and reduces cognitive degradation, possibly through aggregation stimulation among microtubules.88

The principal mechanism by which Abeta drives tau phosphorylation into hyperphosphorylation and possibly altered tau conformations is increased activation of GSK3, which is activated downstream of NMDA-receptor signaling.89 Other, more minor signaling pathways involved are the CAMKK2-AMPK kinase and c-Jun N-terminal kinase pathways.90,91 Obviously, activation of multiple pathways eliminates the ability to fully neutralize Abeta activation of tau with a single molecule, a result that explains in part why a treatment like Memantine is not as effective as it theoretically should be.

In general, the progression of tau to a hyperphosphorylated deleterious agent occurs in a consistent manner: concentrations of tau dramatically increase in the transentorhinal cortex, eventually producing sufficient quantities of NFTs, then concentrations increase in the hippocampal CA1 region (II-IV) before advancing into the temporal (V) and isocortical (VI) areas.92-94 While this progression can occur through normal aging, it is dramatically accelerated in the presence of Abeta despite a lack of direct proximity/compartment relationship, i.e. brain regions with low Abeta concentrations and no plaques can still see accelerated tau phosphorylation due solely to a neuronal connection with a concentration-heavy region.95,96

The exact mechanism by which Abeta induces greater tau phosphorylation through the above enzymes is unclear, but the three major candidates are: 1) direct interaction through specific binding of monomeric and oligomeric Abeta to a variety of neuronal receptors; 2) indirect action involving induced inflammation via glial and microglial cells; 3) cross-seeding between Abeta and tau, dramatically increasing misfolding and hyperphosphorylation probabilities for future tau proteins.84

Overall, the chief problem with using a tau treatment as a principal therapy is that tau is a downstream actor. While tau may produce a meaningful amount of neuronal damage in advanced AD, it is not the only damage-producing agent, and any tau treatment would have to be chronic because it would not influence the concentration of Abeta, the chief upstream effector of tau toxicity. This reality should not eliminate the idea of a tau-based aspect to an AD treatment, but it should end the idea that neutralizing tau alone would be enough.

The importance of ApoE in the development and progression of AD is clear. Therefore, there has been significant study of its transcription, including the important elements of activation and heterodimerization of the nuclear retinoid X receptor (RXR) with peroxisome proliferator-activated receptors (PPAR) or liver X receptors (LXR).97 In addition to facilitating ApoE transcription, these elements also activate lipidators, which are thought necessary for the proper functionality of ApoE.98,99 Thus, it seems reasonable that RXR agonists should increase ApoE transcription and possibly even ApoE efficiency. This was the thought process that led to the utilization of Bexarotene, an older cancer drug, as a possible new therapy.

Early in its testing, Bexarotene produced a very promising result: it upregulated ApoE and other lipidators like ABCA1 in transgenic mouse models of Abeta amyloidosis.100 Furthermore, this upregulation was followed by rapid reduction in Abeta plaques and improved cognitive abilities.100 However, this success was short-lived, as a number of other groups have failed to replicate the plaque reduction and improved cognition despite replicating the upregulation of ApoE and ABCA1.97,101,102

This lack of replication is troubling because various testing has demonstrated that Bexarotene is able to activate its RXR and lipidator targets effectively, yet this upregulation does not appear to consistently and/or effectively remove Abeta. Part of the problem is that there is no single formulation for Bexarotene, thus the one used in the original research demonstrating a positive Abeta removal result may be significantly different from the formulations used in later replication attempts.97 Unfortunately, it appears that the researchers behind the original work have yet to release their Bexarotene formulation, eliminating the ability to address this potential discrepancy.

Another possibility to explain the differences may be the interaction between Bexarotene and the blood brain barrier. Studies identified enhanced Abeta peptide clearance at the blood brain barrier, moving peptides from the brain into the blood through an ApoE- and LRP1-mediated process.103,104 What this result means is still unclear because of the overwhelming failure to replicate the original results. An additional concern created by this discrepancy is that Bexarotene has some negative side effects like weight loss, dyslipidemia, hypersensitivity, hypothyroidism and leukopenia.105 Therefore, as it stands, Bexarotene and other agents influencing RXR do not appear to be viable treatment agents.

As mentioned numerous times, the interaction between cholesterol and AD is important, especially with regards to ApoE; thus some believed that statins could provide an effective means to manage or even treat AD. Despite evidence in animal models supporting neuroprotection and improved pathology for statins, these benefits have not consistently or effectively transferred to human trials.106 Thus, whether or not statins are an effective therapy option for AD patients remains controversial.

If one ignores the results from animal models and humans, the initial premise seems plausible: statins reduce available cellular cholesterol concentrations, which, based on the relationship between cholesterol and ApoE, or even cholesterol and lipid rafts, should reduce Ab levels. Lower Ab levels should reduce, if not outright cease, the progression of AD if achieved at an early enough stage. While the premise seems to flow logically, there are questions as to whether a high enough concentration of statins enters the brain. In addition, cholesterol accumulation and behavior function differently between the brain and the rest of the body due to the blood brain barrier.

In the brain, cholesterol is produced almost exclusively through de novo synthesis instead of relying on a combination of de novo synthesis and lipoprotein uptake through LDL, HDL, etc. Without this lipoprotein-cholesterol relationship, the efflux of cholesterol from the brain relies on 24-S-hydroxycholesterol.106 Not surprisingly, patients with early-onset AD have elevated concentrations of 24-S-hydroxycholesterol, which suggests a higher intracellular cholesterol level.106

Suppose that statins are unable to pass through the blood brain barrier at concentrations high enough to significantly influence cholesterol levels in the brain; how can one explain the results showing that statins do provide some effect, especially in non-AD individuals? If one is to believe there is an effect, then one possible explanation is that by reducing the cholesterol level in the body, the natural synthesis of Ab outside of the brain is reduced. Thus, in individuals with ApoE4, which can bind Ab and transport it across the blood brain barrier, less Ab will be available for transport, potentially reducing the amount of Ab inside the brain.107

However, the same probably cannot be said for those with ApoE2 or E3, as there does not appear to be significant Ab transport into the brain by these versions of ApoE. Therefore, if this assessment is accurate, statins could provide a small therapeutic effect for individuals with AD and the ApoE4 isoform, but would be relatively useless for individuals with AD and the ApoE2 or E3 isoform. This theory could also explain why animal models typically demonstrate positive results, because a number of models stimulate AD development with ApoE4 mutations.

In the brain, ApoE also uses its lipid transport function to aid in the repair of neuronal cells. This attribute was hypothesized when experimenters identified a rapid and dramatic increase (200-fold) in ApoE concentration after neuron injury, followed by a return to normal levels after sufficient time for repair had passed.108 The reason behind this dramatic increase is that under normal conditions almost all of the ApoE in the brain is produced by astrocytes, but under states of stress additional ApoE can be produced by activated microglia and even the neurons themselves.109

The ability to aid in neuronal repair, from most helpful to least helpful among the various ApoE isoforms, runs E2 > E3 > E4.106 Interestingly, ApoE4 appears to have a negative effect on neuronal repair, leaving ApoD to fill the repair facilitation role.110 ApoE4 is also thought to play a role in the general mental decline that occurs with normal aging. The principal element of misrepair seems to stem from an inability to support neurite outgrowth, which leads to loss of synapto-dendritic communication in certain parts of the brain.109

With the general failures of existing therapy strategies, some unconventional therapies have been explored, like Latrepirdine. Latrepirdine was first introduced in Russia as a non-selective anti-histamine.111 Its mechanisms of action involve weak inhibition of acetylcholinesterase and butyrylcholinesterase along with inhibition of NMDA receptors and voltage-gated calcium channels.112 Interestingly, it also has a secondary mitochondrial protective effect, preserving structure and function, especially under stressful conditions. This protection is thought to occur through inhibition of the mitochondrial permeability transition pore, which can be activated by Abeta.113 Based on the reasonable success of Memantine, along with the ability of other anti-histamine drugs to demonstrate some positive benefits in treating neurodegenerative disorders, some believed that Latrepirdine could be a boon in AD treatment.

An initial Phase II study demonstrated safe tolerance and statistically significant improvement in cognitive function and psychiatric symptoms, including an anti-depressive effect, for patients with mild or moderate AD.114 Unfortunately, like so many other treatments before it, Latrepirdine failed to carry these improvements over into Phase III studies.115,116 Some claim that the failure was in the design of the Phase III protocols, not in Latrepirdine itself. However, looking at the results, the benefits seen in the Phase II study were amplified by a worsening placebo group, whereas the Phase III placebo group showed no significant loss of cognitive function. Therefore, the significance of the Phase II success may have derived from a mischaracterization of how severe the AD cases in the placebo group were rather than from the actual efficacy of Latrepirdine. Despite this drawback, there is still hope that Latrepirdine could be a useful therapeutic agent in the future.

Another somewhat less conventional therapy strategy was the utilization of thiazolidinediones. The two most notable thiazolidinediones available for treatment are rosiglitazone and pioglitazone, which were originally developed to treat type 2 diabetes. Both function by stimulating the nuclear peroxisome proliferator-activated receptor gamma (PPARg), which reduces the expression probability of beta-secretase and APP as well as increasing the probability of APP degradation through ubiquitination.117

In addition to the direct action against APP, the loose connection between insulin action and AD led some to believe that both of these agents could be used to increase insulin sensitivity, thereby reducing insulin concentration. Interestingly enough, there are some similarities in the degradation of insulin and Abeta, leading to the belief that reducing the concentration of insulin would eliminate an “indirect” inhibition effect on Abeta-degrading enzymes.117 Unfortunately, neither of these agents has demonstrated positive clinical trial results, with rosiglitazone showing no improvement in cognition or global function; its prospects have been further derailed by new FDA warnings regarding cardiac risks.118

While developing quality therapies for AD is the principal goal, the extent of damage produced during the progression of AD makes the timing of therapy application critical. Therefore, it is important to develop diagnostic methodologies that can detect AD development early enough that a therapy can be applied to ensure no significant change in quality of life, rather than simply hoping for some quality of life. The most reliable non-genetic means of determining whether an individual is at increased risk for developing AD is to measure Abeta42 or tau (both total and phosphorylated concentrations) in CSF. CSF is utilized because, despite having a lower protein content than serum, it directly interacts with the extracellular space in the brain and thus produces an accurate assessment of the biological contents of the brain.

As previously mentioned, the generally accepted neuropathology of AD begins decades before the expression of symptoms, leading to three main phases of AD development: 1) pre-symptomatic (where most of the damage is done); 2) prodromal (mild symptoms mostly focused around episodic memory failures); 3) large-scale memory issues and similar symptomatic features common to dementia.119 Interestingly enough, among these three phases, CSF-derived Abeta42 and tau concentrations only appear to significantly change during the pre-symptomatic stage, i.e. there are only minimal changes during the prodromal and dementia stages.120,121 In addition, outside of very specific genetic conditions, Abeta42 concentrations change (increasing then decreasing) before significant changes are seen in tau concentrations.122

The decrease in Abeta42 concentration is thought to occur because oligomeric Abeta is removed from circulation as it becomes incorporated into plaques. However, if this behavior is accurate, then Abeta42 production must decrease at a greater rate than plaque formation during the advancement of the condition; this result would speak to a negative feedback relationship between plaque formation and Abeta synthesis similar to the one discussed earlier.

Unfortunately, the characteristics of these changes make meaningful detection difficult. The general accuracy of current diagnostic methods is actually rather low, with sensitivities ranging from 71% to 88% and specificities ranging from 44% to 71%.123 In general, diagnostic biomarkers should produce a sensitivity and specificity of at least 85% to be medically useful.124 Another problem is that inter-assay and inter-laboratory variability produces additional inaccuracies ranging from 20% to 35%.125,126 Note that a sensitivity of 100% indicates that all subjects with AD are identified, whereas a specificity of 100% indicates that all non-AD individuals are correctly excluded.
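To make the sensitivity/specificity distinction concrete, here is a minimal Python sketch. The formulas are the standard definitions; the cohort sizes and individual counts are illustrative assumptions, not figures from any cited study:

```python
# Hypothetical example: sensitivity and specificity from confusion-matrix counts.
# All counts below are made up for illustration only.

def sensitivity(true_pos, false_neg):
    """Fraction of actual AD patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-AD individuals the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Suppose a biomarker panel is tested on 200 AD patients and 200 controls:
tp, fn = 160, 40   # AD patients: correctly flagged vs. missed
tn, fp = 120, 80   # controls: correctly cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 80%
print(f"specificity = {specificity(tn, fp):.0%}")  # 60%
```

Note that both hypothetical values fall inside the cited ranges, and the 60% specificity illustrates the practical problem: 80 of 200 healthy controls would be incorrectly labeled as AD candidates.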

Overall, these inaccuracies create problems in clinical drug testing, as 10% to 35% of individuals clinically diagnosed with AD have negative amyloid PET scans, which calls into question whether or not these individuals actually have AD.127 Some have argued that if these diagnostic tests are accurate, then there should be serious consideration of dividing the AD diagnosis into two sub-categories: “amyloid-first” and “neurodegeneration-first”.5 Another problem with using Abeta and tau as biomarkers is differentiating between simple old age and AD, as old age appears to follow a similar pattern.128

A newer, direct method of producing information regarding early progression of AD is amyloid imaging. Researchers at the University of Pittsburgh were the first to produce a reliable imaging strategy by modifying the structure of thioflavine T to include 11C as a positron emitter.5 This altered thioflavine T could cross the blood brain barrier and selectively bind to Abeta. Various other imaging methodologies have been commercially developed, but these strategies forego the use of 11C in favor of 18F because the short half-life of 11C demands immediate access to a cyclotron for accurate PET measurements, whereas 18F permits PET imaging alone.5 The first commercial compound to receive FDA approval was Florbetapir, but the Center for Medicaid and Medicare Services has yet to approve its coverage.5 However, that may have changed recently due to the passage of the Affordable Care Act.
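The cyclotron constraint comes down to simple exponential decay. The half-lives below are standard physical constants (11C ≈ 20.4 minutes, 18F ≈ 109.8 minutes); the 60-minute transport time is an illustrative assumption:

```python
# Why 18F tracers can be shipped from an off-site cyclotron while 11C tracers
# cannot: radioactive decay leaves only a small fraction of 11C activity after
# a typical transport delay. Half-lives in minutes (physical constants).
HALF_LIFE_MIN = {"C-11": 20.4, "F-18": 109.8}

def remaining_fraction(isotope: str, minutes: float) -> float:
    """Fraction of initial activity remaining after `minutes` of decay."""
    return 0.5 ** (minutes / HALF_LIFE_MIN[isotope])

# Assumed 60-minute delay between synthesis and scan:
for iso in ("C-11", "F-18"):
    print(f"{iso}: {remaining_fraction(iso, 60):.0%} of activity left after 60 min")
```

Under this assumption roughly 13% of the 11C activity survives versus roughly 68% of the 18F activity, which is why 11C-based PET effectively requires a cyclotron on site.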

One of the chief concerns about using direct Abeta imaging for diagnosis or even detection purposes is that it tends to prefer fibrils over oligomers. However, fibrils tend to form after oligomers, and there is ample evidence to suggest that oligomers play an important role in the development and progression of AD. Therefore, not only could imaging fail to properly capture the full extent of Abeta expression, it will also lag behind the actual progression of AD.

Another concern arising from this lag is that Abeta-neutralizing strategies will have to proceed before any other negative process, like tau progression, accelerates; otherwise treatment becomes much more difficult. For example, mouse model studies demonstrated that vaccination prior to plaque initiation prevented all amyloidosis, whereas vaccination after initiation only eliminated about 50% of it.129 Another important element in diagnosis that has emerged in recent years is that approximately one-third of patients with clinical AD do not produce Abeta plaques in the brain, which would make differentiating between AD and non-AD (normal aging) more difficult for this method.

Another strategy to rectify the time delay between AD development and the display of AD symptoms is to identify biological and genetic biomarkers that demonstrate significantly increased probability of AD development or currently active AD development. Unfortunately, most attempts to identify genetic biomarkers involve genome-wide association studies, which can produce erroneous results or overly broad results with poorly characterized probabilities. For example, one of the more important identified genes is CLU, which encodes clusterin (a.k.a. apolipoprotein J (ApoJ)).130,131 Clusterin is important because it is involved in Abeta clearance and inhibition and in neuronal apoptosis, and while it is expressed in numerous tissues throughout the body, expression is higher than average in the brain.132-134

This higher than average brain expression, especially in patients with AD, raised hopes that clusterin could be used as an early identification biomarker for AD.135,136 Unfortunately, this hope has not fared well against empirical evidence, as higher levels of brain clusterin have not consistently preceded AD development.137,138 In fact, as previously mentioned in the blog post linked to above, biomarker analyses in general, including meta-analyses, are plagued by inflated, typically bias-induced, effect estimates.

However, research has shown an increase in clusterin concentration in association with depression.138 This could explain some of the contradictory results between clusterin concentration and AD, for some individuals with AD become depressed, for obvious reasons, and some do not. One of the disappointments in the failure to confirm plasma-based clusterin as a biomarker for AD is the loss of the plasma aspect, for the ease and efficiency of plasma testing is superior to collecting CSF.

Research on disease-modifying drugs for AD has covered a lot of ground in recent years, but unfortunately, unlike the existing symptomatic treatments, there has yet to be a significant success. Even more troubling, the available results from the multitude of Phase III studies on disease-modifying drugs suggest that a quality drug is not forthcoming. One strategy for improving the probability of developing a critical treatment is to ensure proper coordination between Phase II and Phase III studies, as it is sometimes difficult to reconcile a glowing success in a Phase II study with a significant failure in the corresponding Phase III study.

Another issue previously discussed on this blog is the importance of studying multi-drug therapies in clinical trials. While individuals like to think of AD as an Abeta disease that later involves tau, there are numerous pathways involved in the development and progression of AD that can inflict significant cellular damage and produce neurological degeneration. Attacking and neutralizing Abeta is clearly the optimal solution, but current research implies that unless this neutralization is achieved very early in the disease progression, long before the development of symptoms, it may not be an effective target. Therefore, it stands to reason that treating AD will commonly involve attacking multiple neurodegenerative pathways. However, there are almost no clinical trials treating AD patients with multiple drugs at the same time; clinical trials continue to be conducted with only one drug versus a placebo.

Some may conclude that such multi-drug therapy would be better consolidated into a single drug that attacks multiple targets (multi-target directed ligand design), which would make treatment less complicated from the patient’s perspective. However, such designs are more complicated from both a regulatory standpoint and a biological one, as the combined effects of a single drug may prove less potent at triggering each pathway than two separate drugs, one for each pathway.

Despite significant levels of effort and research, the immediate future for developing an effective treatment for AD does not seem promising. Some claim that AD research is woefully under-funded given the potential havoc that AD could wreak on the healthcare system in the near future. However, the cry for more funding does not appear to be a valid response to the setbacks currently experienced in the AD research community. It may be that the focus of research must change from attempting to find a single compound that will address AD to creating a multi-drug treatment regimen, and this multi-drug treatment may require investigating more indirect compounds.

For example, flotillin 1 knockout mice express less Abeta and fewer amyloid plaques, but at levels that are not sufficient for treatment.139 However, if flotillin 1 inhibitors were paired with another Abeta therapy, a therapeutic-level result could be produced. Also, the importance of lipid rafts is somewhat acknowledged in the research community, but their important components and interactive elements are typically not investigated for future therapeutic effect. There are high hopes that adherence to a Mediterranean diet will reduce the probability of developing AD, but there have been mixed results regarding whether or not the diet provides a significant protective effect.140-143 Overall, while more funding would be nice, a change in perspective regarding how to treat AD may be the most important step toward producing an effective treatment.

Citations –

1. Ghezzi, L, Scarpini, E, and Galimberti, D. “Disease-modifying drugs in Alzheimer’s disease.” Drug Design, Development and Therapy. 2013. 7:1471-1479.

2. Malinow, R. “New developments on the role of NMDA receptors in Alzheimer’s disease.” Curr Opin Neurobiol. 2012. 22(3):559–563.

3. Lipton, Stuart. “Paradigm shift in neuroprotection by NMDA receptor blockade: Memantine and beyond.” Nature Reviews Drug Discovery. 2006. doi:10.1038/nrd1963.

4. Castillo-Carranza, D, Guerrero-Munoz, M, and Kayed, R. “Immunotherapy for the treatment of Alzheimer’s disease: amyloid-beta or tau, which is the right target?” Immuno Targets and Therapy. 2014. 3:19-328.

5. Gandy, S, and DeKosky, S. “Toward the treatment and prevention of Alzheimer’s disease: rational strategies and recent progress.” Annu. Rev. Med. 2013. 64:367-383.

6. Fitzgerald, S. “Two large Alzheimer’s trials fail to meet endpoints: what’s next?” Neurology Today. March 6, 2014. 12-15.

7. Soscia, S, et Al. “The Alzheimer’s disease-associated amyloid beta-protein is an antimicrobial peptide.” PloS One. 2010. 5:e9505.

8. Landreh, M, Johansson, J, and Jornvall, H. “Separate molecular determinants in amyloidogenic and antimicrobial peptides.” J. Mol. Biol. 2014. 426:2159-2166.

9. Last, N, and Miranker, A. “Common mechanism unites membrane poration by amyloid and antimicrobial peptides.” PNAS 2013. 110:6382–6387.

10. Kamenetz, F, et Al. “APP processing and synaptic function.” Neuron. 2003. 37:925-937.

11. Gilman, S, Koller, M, and Black, R. “Clinical effects of Abeta immuniza­tion (AN1792) in patients with AD in an interrupted trial.” Neurology. 2005. 64:1553–1562.

12. Jarrett, J, and Lansbury, P. “Amyloid fibril formation requires a chemically discriminating nucleation event: studies of an amyloidogenic sequence from the bacterial protein OsmB.” Biochemistry. 1992. 31:12345–12352.

13. Chapman, M, et Al. “Role of Escherichia coli curli operons in directing amyloid fiber formation.” Science. 2002. 295:851–855.

14. Miklossy, J. “The spirochetal etiology of Alzheimer’s disease: a putative therapeutic approach. Alzheimer disease: therapeutic strategies.” In: Giacobini E, Becker R, editors. Proceedings of the third international Springfield Alzheimer symposium, Part I. Birkhauser Boston Inc. 1994. 41–48.

15. Miklossy, J. “Chronic inflammation and amyloidogenesis in Alzheimer’s disease: putative role of bacterial peptidoglycan, a potent inflammatory and amyloidogenic factor.” Alzheimer’s Rev. 1998. 3:45–51.

16. Dominguez, D, et Al. “Phenotypic and biochemical analyses of BACE1- and BACE2-deficient mice.” J Biol Chem. 2005. 280:30797–30806.

17. Fastbom, J, Forsell, Y, and Winblad, B. “Benzodiazepines may have protective effects against Alzheimer disease.” Alzheimer Dis. Assoc. Disord. 1998, 12:14-17.

18. Friedman, D, Honig, L, and Scarmeas, N. “Seizures and epilepsy in Alzheimer’s disease.” CNS Neurosci Ther. 2012. 18(4):285-294.

19. Yan, X-X, et Al. “Chronic Temporal Lobe Epilepsy Is Associated with Enhanced Alzheimer-Like Neuropathology in 3xTg-AD Mice.” PLoS One. 2012. 7:e48782. doi:10.1371/journal.pone.0048782

20. Busche, M, et Al. “Clusters of hyperactive neurons near amyloid plaques in a mouse model of Alzheimer’s disease.” Science. 2008. 321:1686–1689.

21. Hsia, A, et Al. “Plaque-independent disruption of neural circuits in Alzheimer’s disease mouse models.” PNAS. 1999. 96:3228-3233.

22. Shankar, G, et Al. “Natural oligomers of the Alzheimer amyloid-beta protein induce reversible synapse loss by modulating an NMDA type glutamate receptor-dependent signaling pathway.” The Journal of Neuroscience. 2007. 27(11):2866-2875.

23. Walsh, D, et Al. “Naturally secreted oligomers of amyloid beta protein potently inhibit hippocampal long-term potentiation in vivo.” Nature. 2002. 416:535–539.

24. Vezzani, A, Sperk, G, and Colmers, W. “Neuropeptide Y: emerging evidence for a functional role in seizure modulation.” Trends in neurosciences. 1999. 22.1:25-30.

25. Hillen, H, et Al. “Generation and therapeutic efficacy of highly oligomer-specific β-amyloid antibodies.” J Neurosci. 2010. 30:10369–10379.

26. Dodel, R, et Al. “Naturally occurring autoantibodies against β-amyloid: investigating their role in transgenic animal and in vitro models of Alzheimer’s disease.” J Neurosci. 2011. 31:5847–5854.

27. De Strooper, B, and Annaert, W. “Proteolytic processing and cell biological functions of the amyloid precursor protein.” J. Cell Sci. 2000. 113:1857–1870.

28. Ehehalt, R, et Al. “Amyloidogenic processing of the Alzheimer beta-amyloid precursor protein depends on lipid rafts.” The Journal of Cell Biology. 2003. 160(1):113-123.

29. Cheret, C, et Al. “Bace1 and Neuregulin-1 cooperate to control formation and maintenance of muscle spindles.” The EMBO Journal. 2013. 32:2015–2028.

30. Mangialasche, F, et Al. “Alzheimer’s disease: clinical trials and drug development.” Lancet Neurol. 2010. 9:702-716.

31. Doody, R, et Al. “A phase 3 trial of Semagacestat for treatment of Alzheimer’s disease.” N. Engl. J. Med. 2013. 369:341-350.

32. Green, R, et Al. “Effect of tarenflurbil on cognitive decline and activities of daily living in patients with mild Alzheimer disease: a randomized controlled trial.” JAMA. 2009. 302:2557-2564.

33. Wang, J, Ho, L, and Passinetti, G. “The development of NIC5-15, a natural anti-diabetic agent, in the treatment of Alzheimer’s disease.” Alzheimers Dement. 2005. 1(suppl 1):62.

34. Grossman, H, et Al. “NIC5-15 as a treatment for Alzheimer’s: safety, pharmacokinetics and clinical variables.” Alzheimers Dement. 2009. 5(4 suppl 1):P259.

35. Urano, Y, et Al. “Association of active alpha-secretase complex with lipid rafts.” Journal of Lipid Research. 2005. 46:904-912.

36. Wahrle, S, et Al. “Cholesterol-dependent gamma-secretase activity in buoyant cholesterol-rich membrane microdomains.” Neurobiol. Dis. 2002. 9:11–23.

37. Wada, S, et Al. “Gamma-secretase activity is present in rafts but is not cholesterol-dependent.” Biochemistry. 2003. 42:13977–13986.

38. Lee, S, et Al. “A detergent-insoluble membrane compartment contains A beta in vivo.” Nat. Med. 1998. 4:730–734.

39. Riddell, D, et Al. “Compartmentalization of beta-secretase (Asp2) into low-buoyant density, noncaveolar lipid rafts.” Curr. Biol. 2001. 11:1288–1293.

40. Kawarabayashi, T, et Al. “Dimeric amyloid beta protein rapidly accumulates in lipid rafts followed by Apolipoprotein E and phosphorylated tau accumulation in the Tg2576 mouse model of Alzheimer’s disease.” The Journal of Neuroscience. 2004. 24(15):3801-3809.

41. Sargiacomo, M, et Al. “Signal transducing molecules and glycosyl-phosphatidylinositol-linked proteins form a caveolin-rich insoluble complex in MDCK cells.” J Cell Biol. 1993. 122:789–807.

42. Kojro, E, et Al. “Low cholesterol stimulates the non-amyloidogenic pathway by its effect on the alpha-secretase ADAM 10.” PNAS. 2001. 98(10):5815-5820.

43. Nordstedt, C, et Al. “Identification of the Alzheimer beta/A4 amyloid precursor protein in clathrin-coated vesicles purified from PC12 cells.” Journal of Biological Chemistry. 268.1. (1993): 608-612.

44. Yamazaki, T, Koo, E, and Selkoe, D. “Trafficking of cell-surface amyloid beta-protein precursor II. Endocytosis, recycling, and lysosomal targeting detected by immunolocalization.” J. Cell. Sci. 1996. 109:999–1008.

45. Grimm, M, et Al. “Regulation of cholesterol and sphingomyelin metabolism by amyloid-beta and presenilin.” Nature Cell Biology. 2005. 7(11):1118-1128.

46. Gomez-Brouchet, A, et Al. “Critical role for sphingosine kinase-1 in regulating survival of neuroblastoma cells exposed to amyloid-beta peptide.” Mol. Pharmacol. 2007. 72:341-349.

47. Dodel, R, et Al. “Human antibodies against amyloid beta peptide: a potential treatment for Alzheimer's disease.” Ann Neurol. 2002. 52:253–256.

48. Safavi, A, et Al. “Comparison of several human immunoglobulin products for anti-Aβ1–42 titer.” 10th International Conference on Alzheimer's Disease and Related Disorders. Madrid, Spain: International Conference on Alzheimer’s Disease. 2006.

49. Klaver, A, et Al. “Antibody concentrations to Abeta1-42 monomer and soluble oligomers in untreated and antibody-antigen-dissociated intravenous immunoglobulin preparations.” Int Immunopharmacol. 2010. 10:115–119.

50. Dodel, R, et Al. “Intravenous immunoglobulins as a treatment for Alzheimer’s disease: rationale and current evidence.” Drugs. 2010. 70:513–528.

51. Balakrishnan, K, et Al. “Comparison of intravenous immunoglobulins for naturally occurring autoantibodies against amyloid-beta.” J Alzheimers Dis. 2010. 20:135–143.

52. Loeffler, D. “Intravenous immunoglobulin and Alzheimer’s disease: what now?” Journal of Neuroinflammation. 2013. 10:70-77.

53. Smith, L, et Al. “Intravenous immunoglobulin products contain specific antibodies to recombinant human tau protein.” Int Immunopharmacol. 2013. 16:424–428.

54. Bayry, J, Kazatchkine, M, and Kaveri, S. “Shortage of human intravenous immunoglobulin-reasons and possible solutions.” Nat Clin Pract Neurol. 2007. 3:120-121.

55. Public Hospital Pharmacy Coalition: Hospitals Struggle to Access Key Blood Products at Affordable Prices. http://www.snhpa.org/public/documents/pdfs/

56. Dalakas, M. “High-dose intravenous immunoglobulin and serum viscosity: risk of precipitating thromboembolic events.” Neurology. 1994. 44:223–226.

57. Brannagan, T. “Intravenous gammaglobulin (IVIg) for treatment of CIDP and related immune-mediated neuropathies.” Neurology. 2002. 59:S33–S40.

58. Duhem, C, Dicato, M, and Ries, F. “Side-effects of intravenous immune globulins.” Clin Exp Immunol. 1994. 97:79–83.

59. Brox, A, et Al. “Hemolytic anemia following intravenous gamma globulin administration.” Am J Med. 1987. 82:633–635.

60. Frame, W, and Crawford, R. “Thrombotic events after intravenous immunoglobulin.” Lancet. 1986. 2:468.

61. Kessary-Shoham, H, et Al. “In vivo administration of intravenous immunoglobulin (IVIg) can lead to enhanced erythrocyte sequestration.” J Autoimmun. 1999. 13:129–135.

62. Machimoto, T, et Al. “Effect of IVIG administration on complement activation and HLA antibody levels.” Transpl Int. 2010. 23:1015–1022.

63. Kessel, A, et Al. “Intravenous immunoglobulin therapy affects T regulatory cells by increasing their suppressive function.” J Immunol. 2007. 179:5571–5575.

64. Eriksson, U, et Al. “Asthma, eczema, rhinitis and the risk for dementia.” Dement Geriatr Cogn Disord. 2008. 25:148–156.

65. St-Amour, I, et Al. “IVIG protects the 3xTg-AD mouse model of Alzheimer’s disease from memory deficit and Abeta pathology.” Journal of Neuroinflammation. 2014. 11:54-70.

66. Samuelsson, A, Towers, T, and Ravetch, J. “Anti-inflammatory activity of IVIG mediated through the inhibitory Fc receptor.” Science. 2001. 291:484–486.

67. Anthony, R, et Al. “Recapitulation of IVIG anti-inflammatory activity with a recombinant IgG Fc.” Science. 2008. 320:373–376.

68. Anthony, R, et Al. “Intravenous gammaglobulin suppresses inflammation through a novel T(H)2 pathway.” Nature. 2011. 475:110–113.

69. Anthony, R, et Al. “Identification of a receptor required for the anti-inflammatory activity of IVIG.” PNAS. 2008. 105:19571–19578.

70. Lesne, S, et Al. “A specific amyloid-beta protein assembly in the brain impairs memory.” Nature. 2006. 440:352–357.

71. Arriagada, P, et Al. “Neurofibrillary tangles but not senile plaques parallel duration and severity of Alzheimer’s disease.” Neurology. 1992. 42:631–639.

72. Giannakopoulos, P, et Al. “Tangle and neuron numbers, but not amyloid load, predict cognitive status in Alzheimer’s disease.” Neurology. 2003. 60:1495–1500.

73. Chai, X, et Al. “Passive immunization with anti-tau antibodies in two transgenic models: reduction of tau pathology and delay of disease progression.” J Biol Chem. 2011. 286:34457–34467.

74. Boutajangout, A, et Al. “Passive immunization targeting pathological phospho-tau protein in a mouse model reduces functional decline and clears tau aggregates from the brain.” J Neurochem. 2011. 118:658–667.

75. Craig-Schapiro, R, et Al. “YKL-40: a novel prognostic fluid biomarker for preclinical Alzheimer’s disease.” Biol Psychiatry. 2010. 68:903–912.

76. Hutton, M. “Association of missense and 5’-splice-site mutations in tau with the inherited dementia FTDP-17.” Nature. 1998. 393(6686):702–705.

77. Brunden, K, Trojanowski, J, and Lee V. “Advances in tau-focused drug discovery for Alzheimer’s disease and related tauopathies.” Nat Rev Drug Discov. 2009. 8(10):783–793.

78. Rapoport, M, et Al. “Tau is essential to beta-amyloid-induced neurotoxicity.” PNAS. 2002. 99(9):6364-6369.

79. Meraz-Ríos, M, et Al. “Tau oligomers and aggregation in Alzheimer’s disease.” J Neurochem. 2010. 112(6):1353–1367.

80. Lasagna-Reeves, C, et Al. “Alzheimer brain-derived tau oligomers propagate pathology from endogenous tau.” Sci Rep. 2012. 2:700.

81. Lasagna-Reeves, C, et Al. “Identification of oligomers at early stages of tau aggregation in Alzheimer’s disease.” FASEB J. 2012. 26(5):1946–1959.

82. Iba, M, et Al. “Synthetic tau fibrils mediate transmission of neurofibrillary tangles in a transgenic mouse model of Alzheimer’s-like tauopathy.” J Neurosci. 2013. 33(3):1024–1037.

83. Guo, J, and Lee, V. “Cell-to-cell transmission of pathogenic proteins in neurodegenerative diseases.” Nat Med. 2014. 20(2):130–138.

84. Stancu, I, et Al. “Models of beta-amyloid induced tau-pathology: the long and ‘folded’ road to understand the mechanism.” Molecular Neurodegeneration. 2014. 9:51-65.

85. Tariot, P, and Aisen, P. “Can lithium or valproate untie tangles in Alzheimer’s disease?” J Clin Psychiatry. 2009. 70:919-21.

86. Tariot, P, et Al. “The ADCS valproate neuroprotection trial: primary efficacy and safety results.” Alzheimers Dement. 2009. 5(4 suppl 1):P84-85.

87. Hampel, H, et Al. “Lithium trial in Alzheimer’s disease: a randomized, single-blind, placebo-controlled, multi-center 10-week study.” J Clin Psychiatry. 2009. 70:922–31.

88. Green, K, et Al. “Nicotinamide restores cognition in Alzheimer’s disease transgenic mice via a mechanism involving sirtuin inhibition and selective reduction of Thr231-phosphotau.” J Neurosci. 2008. 28:11500-10.

89. Tackenberg, C, et Al. “NMDA receptor subunit composition determines beta-amyloid-induced neurodegeneration and synaptic loss.” Cell Death Dis. 2013. 4:e608.

90. Ma, Q, et Al. “Beta-amyloid oligomers induce phosphorylation of tau and inactivation of insulin receptor substrate via c-Jun N-terminal kinase signaling: suppression by omega-3 fatty acids and curcumin.” J Neurosci. 2009. 29(28):9078–9089.

91. Mairet-Coello, G, et Al. “The CAMKK2-AMPK kinase pathway mediates the synaptotoxic effects of Abeta oligomers through Tau phosphorylation.” Neuron. 2013. 78(1):94–108.

92. Serrano-Pozo, A, et Al. “Neuropathological alterations in Alzheimer disease.” Cold Spring Harb Perspect Med. 2011. 1(1):a006189.

93. Braak, H, and Braak, E. “Neuropathological stageing of Alzheimer-related changes.” Acta Neuropathol. 1991. 82(4):239–259.

94. Hyman, B, and Trojanowski, J. “Consensus recommendations for the postmortem diagnosis of Alzheimer disease from the National Institute on Aging and the Reagan Institute Working Group on diagnostic criteria for the neuropathological assessment of Alzheimer disease.” J Neuropathol Exp Neurol. 1997. 56(10):1095–1097.

95. Terwel, D, et Al. “Amyloid activates GSK-3beta to aggravate neuronal tauopathy in bigenic mice.” Am J Pathol. 2008. 172(3):786–798.

96. Stancu, I, et Al. “Tauopathy contributes to synaptic and cognitive deficits in a murine model for Alzheimer’s disease.” FASEB J. 2014. 28(6):2620–2631.

97. LaClair, K, et Al. “Treatment with bexarotene, a compound that increases apolipoprotein-E, provides no cognitive benefit in mutant APP/PS1 mice.” Molecular Neurodegeneration. 2013. 8:18-28.

98. Tokuda, T, et Al. “Lipidation of apolipoprotein E influences its isoform-specific interaction with Alzheimer's amyloid beta peptides.” Biochem J. 2000. 348:359–65.

99. Bell, R, et Al. “Transport pathways for clearance of human Alzheimer's amyloid beta-peptide and apolipoproteins E and J in the mouse central nervous system.” J Cereb Blood Flow Metab. 2007. 27:909–918.

100. Cramer, P, et Al. “ApoE-directed therapeutics rapidly clear β-amyloid and reverse deficits in AD mouse models.” Science. 2012. 335:1503–1506.

101. Fitz, N, et Al. “Comment on “ApoE directed therapeutics rapidly clear beta-amyloid and reverse deficits in AD mouse models.”” Science Tech. Comments. 2013. 340:924-c.

102. Price, A, et Al. “Comment on “ApoE-directed therapeutics rapidly clear beta-amyloid and reverse deficits in AD mouse models.”” Science Tech. Comments. 2013. 340:924-d.

103. Bachmeier, C, et Al. “Stimulation of the retinoid X receptor facilitates beta-amyloid clearance across the blood-brain barrier.” J Mol Neurosci. 2013. 49:270-276.

104. Saint-Pol, J, et Al. “The LXR/RXR approaches in Alzheimer’s disease: is the blood-brain barrier the forgotten partner?” J Alzheimers Dis Parkinsonism. 2013. 3:4-7.

105. Tesseur, I, et Al. “Comment on “ApoE-directed therapeutics rapidly clear beta-amyloid and reverse deficits in AD mouse models.”” Science. 2013. 340:924-924e.

106. Hoglund, K, et Al. “Plasma Levels of β-Amyloid(1-40), β-Amyloid(1-42), and Total β-Amyloid Remain Unaffected in Adult Patients With Hypercholesterolemia After Treatment With Statins.” Arch Neurol. 2004. 61(3):333-337.

107. Holtzman, D, Herz, J, and Bu, G. “Apolipoprotein E and apolipoprotein E receptors: normal biology and roles in Alzheimer disease.” Cold Spring Harb Perspect Med. 2012. 2:a006312.

108. Elliott, D, Weickert, C, and Garner, B. “Apolipoproteins in the brain: implications for neurological and psychiatric disorders.” Clin. Lipidol. 2010. 51(4):555-573.

109. Holtzman, D, Herz, J, and Bu, G. “Apolipoprotein E and apolipoprotein E receptors: normal biology and roles in Alzheimer disease.” Cold Spring Harb Perspect Med. 2012. 2:a006312.

110. Rickhag, M, et Al. “Apolipoprotein D is elevated in oligodendrocytes in the peri-infarct region after experimental stroke: influence of enriched environment.” J. Cereb. Blood Flow Metab. 2008. 28(3):551-62.

111. Bharadwaj, P. “Latrepirdine: molecular mechanisms underlying potential therapeutic roles in Alzheimer’s and other neurodegenerative diseases.” Transl Psychiatry. 2013. 3:e332-341.

112. Wu, J, Li, Q, and Bezprozvanny, I. “Evaluation of dimebon in cellular model of Huntington’s disease.” Mol Neurodegener. 2008. 3:15.

113. Moreira, P, et Al. “Amyloid beta-peptide promotes permeability transition pore in brain mitochondria.” Biosci Rep. 2001. 21:789-800.

114. Bachurin, S, et Al. “Antihistamine agent Dimebon as a novel neuroprotector and a cognition enhancer.” Ann N Y Acad Sci. 2001. 939:425–435.

115. Contact: An Alzheimer’s Disease Investigational Trial. http://www.contactstudy.com/

116. Horizon: A Huntington Disease Investigational Trial. http://www.horizontrial.com/index.php

117. Landreth, G, et Al. “PPAR-gamma agonists as therapeutics for the treatment of Alzheimer’s disease.” Neurotherapeutics. 2008. 5:481-89.

118. Gold, M, et Al. “Effects of rosiglitazone as monotherapy in APOE4-stratified subjects with mild-to-moderate Alzheimer’s disease.” Alzheimers Dement. 2009. 5(4 suppl 1):P86.

119. Dubois, B, et Al. “Research criteria for the diagnosis of Alzheimer’s disease: revising the NINCDS-ADRDA criteria.” Lancet Neurol. 2007. 6:734–746.

120. Mattsson, N, et Al. “Longitudinal cerebrospinal fluid biomarkers over four years in mild cognitive impairment.” J Alzheimers Dis. 2012. 30:767–778.

121. Zetterberg, H, et Al. “Intra-individual stability of CSF biomarkers for Alzheimer’s disease over two years.” J Alzheimers Dis. 2007. 12:255–260.

122. Jack, C Jr, et Al. “Tracking pathophysiological processes in Alzheimer’s disease: an updated hypothetical model of dynamic biomarkers.” Lancet Neurol. 2013. 12:207–216.

123. Beach, T, et Al. “Accuracy of the clinical diagnosis of Alzheimer disease at National Institute on Aging Alzheimer Disease Centers, 2005-2010.” J Neuropathol Exp Neurol. 2012. 71:266–273.

124. Shaw, L, et Al. “Cerebrospinal fluid biomarker signature in Alzheimer’s disease neuroimaging initiative subjects.” Ann. Neurol. 2009. 65(4):403-413.

125. Lewczuk, P, et Al. “International quality control survey of neurochemical dementia diagnostics.” Neurosci Lett. 2006. 409:1–4.

126. Verwey, N, et Al. “A worldwide multicentre comparison of assays for cerebrospinal fluid biomarkers in Alzheimer’s disease.” Ann Clin Biochem. 2009. 46:235–240.

127. Salloway, S, et Al. “Two phase 3 trials of bapineuzumab in mild-to-moderate Alzheimer’s disease.” N Engl J Med. 2014. 370:322–333.

128. Fagan, A, et Al. “Cerebrospinal fluid tau/beta-amyloid42 ratio as a prediction of cognitive decline in non-demented older adults.” Arch Neurol. 2007. 64:343–349.

129. Schenk, D, et Al. “Immunization with amyloid-beta attenuates Alzheimer’s disease-like pathology in the PDAPP mouse.” Nature. 1999. 400:173-77.

130. Harold, D, et Al. “Genome-wide association study identifies variants at CLU and PICALM associated with Alzheimer’s disease.” Nat Genet. 2009. 41:1088–1093.

131. Seshadri, S, et Al. “Genome-wide analysis of genetic loci associated with Alzheimer disease.” JAMA. 2010. 303:1832–1840.

132. Oda, T, et Al. “Purification and characterization of brain clusterin.” Biochem Biophys Res Commun. 1994. 204:1131–1136.

133. Kim, N, et Al. “Nuclear clusterin is associated with neuronal apoptosis in the developing rat brain upon ethanol exposure.” Alcohol Clin Exp Res. 2012. 36:72–82.

134. de Silva, H, et Al. “Apolipoprotein J: structure and tissue distribution.” Biochemistry. 1990. 29:5380–5389.

135. Schrijvers, E, et Al. “Plasma clusterin and the risk of Alzheimer disease.” JAMA. 2011. 305:1322–1326.

136. Xing, Y, et Al. “Blood clusterin levels, rs9331888 polymorphism, and the risk of Alzheimer’s disease.” J Alzheimers Dis. 2012. 29:515–519.

137. IJsselstijn, L, et Al. “Serum clusterin levels are not increased in presymptomatic Alzheimer’s disease.” J Proteome Res. 2011. 10:2006–2010.

138. Silajdzic, E, et Al. “No diagnostic value of plasma clusterin in Alzheimer’s disease.” PloS One. 2012. 7(11):e50237-50241.

139. Bitsikas, V, et Al. “The role of flotillins in regulating Aβ production, investigated using Flotillin 1-/-, Flotillin 2-/- double knockout mice.” PloS One. 2014. 9(1):e85217-e85226.

140. Singh, B, et Al. “Association of Mediterranean diet with mild cognitive impairment and Alzheimer’s disease: a systematic review and meta-analysis.” J. Alzheimers Dis. 2014. 39(2):271-282.

141. Sofi, F, et Al. “Accruing evidence on benefits of adherence to the Mediterranean diet on health: An updated systematic review and meta-analysis.” Am J Clin Nutr. 2010. 92:1189–1196.

142. Cherbuin, N, and Anstey, K. “The mediterranean diet is not related to cognitive change in a large prospective investigation: The PATH through life study.” Am J Geriatr Psychiatry. 2012. 20:635–639.

143. Cherbuin, N, Kumar, R, and Anstey, K. “Caloric intake, but not the mediterranean diet, is associated with cognition and mild cognitive impairment.” Alzheimers Dement. 2011. (1):S691.