On congress using CBO to deceive the public about the Affordable Care Act

I have been following the recent controversy over candid remarks by one of the architects of, or key advisers to, the Congressional and Presidential crafting of the Affordable Care Act (ACA).  The ACA continues to attract my attention long after it passed.  This controversy involves quotes by Jonathan Gruber suggesting that there was a deliberate attempt to deceive voters and their representatives in order to get the law passed.

Coincidentally, these quotes come out at a politically charged time: a lame-duck session following a mid-term election that significantly changed the balance of political party influence in the legislative houses.  Also coincidentally, the ACA's annual open enrollment has just started, both for enrolling the uninsured (with no limitations due to pre-existing conditions) and for re-enrolling those who signed up last year.   Those using the exchanges get to see the changes in premiums and terms for policies.

This recent controversy also relates to several points I have already been discussing on this blog site.  In particular, Jonathan Gruber is an economist whose role in drafting the ACA was to use data and models to justify the legislation and to set its parameters.   He developed a simulation model used by the Congressional Budget Office (CBO) for scoring legislation, and his role in developing that model has earned it the name Gruber Microsimulation Model (GMSIM).   Although he later worked outside of the CBO, he used the same model to predict the results the CBO would get when it evaluated particular provisions of the legislation.

This scenario nicely illustrates a caution I raised in an earlier post about the dangers of relying too much on automated predictive analytics (in big data contexts).   The risk is that an adversary can discover this reliance and then manipulate the data for the adversary's advantage at the owner's expense.  In this case, Gruber manipulated the data (the legislation's provisions) in just the right way to obtain from the CBO a prediction that would be politically palatable for passage.   In effect, he engineered a way to undermine the CBO's independence and non-partisan reputation because he knew they were obligated to use certain simulation models that he understood very well.   At least for this particular legislation, Congress was able to make irrelevant the CBO's role as an honest, non-partisan analyst of legislation.   The CBO merely confirmed the simulation results already computed by Congress; its analysis was reduced to confirming those results instead of making an independent contribution.
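To make the mechanism concrete, here is a minimal sketch of the general risk.  It is not the GMSIM (whose internals I do not know); the scoring function, provisions, and numbers are entirely hypothetical.  The point is only that a drafter with access to the reviewer's scoring model can tune provisions until that model reports an acceptable score.

```python
# Hypothetical sketch: an adversary with access to the reviewer's scoring model
# can tune provisions until that model reports an acceptable score.
# The model and numbers below are invented; this is not the GMSIM.

def score_provisions(mandate_penalty, subsidy_rate, cadillac_tax):
    """Toy 10-year deficit impact in $billions; stands in for any fixed scoring model."""
    revenue = 900 * cadillac_tax + 400 * mandate_penalty
    spending = 1200 * subsidy_rate
    return spending - revenue  # positive means the bill adds to the deficit

def tune_until_acceptable(max_deficit=0.0, step=0.01):
    """Search one provision parameter until the shared model returns a passing score."""
    cadillac_tax = 0.0
    while cadillac_tax <= 1.0:
        score = score_provisions(mandate_penalty=0.5, subsidy_rate=0.7,
                                 cadillac_tax=cadillac_tax)
        if score <= max_deficit:
            return cadillac_tax, score
        cadillac_tax += step
    return None

print(tune_until_acceptable())  # the drafter already knows the score the reviewer will report
```

Because the reviewer is obligated to run the same model on the same provisions, its score necessarily matches the one the drafter already computed.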

The CBO really had no choice but to repeat the same analysis because they had adopted the approach I described in an earlier post as an obligation of the decision-maker (in this case the CBO) to accept the approved analytic model (in this case the GMSIM).  In that post, I suggested that decision makers may as well be automated because we don't allow humans to deviate from the recommendations of the established models.   In this case, the CBO's scoring of the legislation was automated because they had to report what the GMSIM (and other models known to Congress) would produce.   This obligation to follow the model's recommendation specifically denies the decision maker the freedom to make his own decision contrary to the analytic results.

In that post, I pointed out that without this obligation to follow the model's recommendation, the decision maker would be free to draw upon his experience and include consideration of his informed doubts and fears.   We select seasoned decision makers because they have developed a sophisticated appreciation for doubts and fears, and we want people with that appreciation because we want them to consider their doubts and fears in their decisions.

Once we demand obligation to follow the results of analytic models, we forbid the consideration of any doubts and fears, even those of seasoned, experienced decision-makers.   We may as well automate the decision making.  In the case of the ACA, it appears Congress did indeed automate the CBO.   The CBO had no opportunity to raise fears and doubts informed by its nonpartisan reputation.

In that post about obligated decision making, I defended the use of informed doubts and fears in decision making.   The defense centered on the accountability of the decision-maker.   A decision-maker makes decisions.   A decision is a single choice among multiple alternatives, and that single choice is the one we will experience.   Every decision has consequences, and some of those consequences will injure some group.   The injured group will demand accountability of the decision-maker.   Accountability requires that the decision-maker provide a persuasive argument for the reasoning behind his decision.   He has to convince the aggrieved that not only did he consider the evidence, but he also considered reasonable doubts and fears of being wrong.   If he can't convince the aggrieved, he must instead convince a super-majority to agree with his decision and thus marginalize the aggrieved.   Faced with this prospect of having to defend a decision in light of adverse consequences, a free decision maker will demand to consider doubts and fears in addition to the evidence.  A free decision-maker cannot be obligated to follow a computer model's recommendation.

An accountable decision maker is able to defend the consequences of his decision.   When the decision is obligated, his only defense is that he had no choice but to follow the accepted models at the time of the decision.  This is a weaker defense than being able to build an argument that specific doubts and fears were explicitly considered, and that those doubts and fears included what actually happened.

The ability to include consideration of doubts and fears in an argument can be more persuasive to an audience, especially an aggrieved audience.   This is demonstrated in our judicial system, which considers the accused innocent until proven guilty beyond a reasonable doubt.  The validity of the judicial system collapses if we don't permit the existence of reasonable doubt.   Doubt may be broader than just the strength of the evidence; it includes the credibility of the different advocates and their motives.  Eliminating the opportunity to consider doubts and fears makes the decision automated, based on the evidence alone.   The decision maker's only recourse for defending a purely evidence-based decision is that the models gave him no choice.

The CBO's current defense for their approval of the law is that their models gave them no other option.   Their reputation for nonpartisan impartiality rests on their reliance on approved computing models, excluding any human inputs of doubts and fears.   Human decision making through argumentation can be convincingly impartial (as illustrated by our judicial system) when it involves rigorous argumentation using non-fallacious techniques.   Obviously the advocates of the competing arguments will be biased, but the final decision comes from the jury of peers who demand that the winning argument overcome their reasonable doubts.   In the case of the CBO's scoring of the ACA, there was good reason to have doubts.  In particular, the CBO knew that the drafters had access to the same models and designed the legislation specifically to obtain an acceptable modeling result from that model.   It should have been obvious to the CBO that their independence was at risk if they used the exact same model.   They proceeded to use it anyway, with the result of merely confirming the outcome the legislation was designed to produce.  The CBO's impartial simulation operators ran the same simulation and got the same result.  We gained from the CBO a replicated analysis by an independent group instead of an independent analysis.

If the models were that well trusted, the CBO’s contribution was unnecessary.  They only provided the impartial operators to run the same simulation previously run by the drafters of the legislation.   Both used essentially the same simulation.

Congress created the CBO to provide an impartial review body to evaluate legislation on its merits.   Given that description, my expectation would be that the CBO would initiate new analysis when presented with new proposed legislation.  This would include fresh research to obtain new information about the specific details of the legislation.   I imagine a scientific process where a new hypothesis (in this case, legislation based on prior knowledge) is followed by a fresh experiment to obtain well-controlled data designed specifically to test the hypothesis.   The testing of a hypothesis needs information that is new and distinct from the information that formed the hypothesis.

Instead of a fresh experiment, the CBO repeated the hypothesis-generating analysis to produce a hypothesis-test result.  This was emphasized by the short amount of time that the CBO took to produce their results: a few hours or days after being provided the legislation.  To me, this does not appear to be enough time to create a new experiment to obtain new data.  They used the same data that formed the hypothesis (the legislation) to verify that the hypothesis was correct.    Given that circularity, the CBO's role was superfluous.  The CBO's analysis added nothing to the legislation.
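A generic statistical analogy (not the CBO's actual procedure, into which I have no insight) illustrates the circularity.  Scoring a hypothesis against the very data that produced it can never surprise us; only a fresh experiment can.  The process, numbers, and noise levels below are invented for illustration.

```python
# Analogy only: evaluating a hypothesis on the data that formed it is circular,
# while fresh data provides an actual test.  Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)

# "Hypothesis-forming" data: noisy observations of some underlying process.
x_old = rng.uniform(0, 10, 50)
y_old = 2.0 * x_old + rng.normal(0, 5, 50)

# Form the hypothesis (fit a line) from that same data.
slope, intercept = np.polyfit(x_old, y_old, 1)

def mean_squared_error(x, y):
    return np.mean((y - (slope * x + intercept)) ** 2)

# Circular check: the hypothesis is scored against the data that produced it.
print("error on hypothesis-forming data:", mean_squared_error(x_old, y_old))

# Fresh experiment: new observations of the same process give an honest test
# that could, in principle, falsify the hypothesis.
x_new = rng.uniform(0, 10, 50)
y_new = 2.0 * x_new + rng.normal(0, 5, 50)
print("error on fresh data:", mean_squared_error(x_new, y_new))
```

The first number can only flatter the hypothesis because the fit was chosen to minimize it; only the second carries any new information about the process.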

Actually, the CBO did add something to the broader political debate.   The stated reason for the CBO's existence is to provide an assessment from a group that is independent of congress and politics.   The CBO's role is meant to gain the confidence of the voters by providing a nonpartisan, impartial analysis of legislation.  Congress used the CBO's results for the political purpose of convincing the public that the legislation had received a fair, impartial analysis.

In this particular case, their results were predetermined by congress using the same models, and apparently designed to deceive the public about specific provisions.   Because the CBO was constrained to use the same model, and within tight schedules that permitted only the rerunning of that model, the CBO added nothing new to the legislation.  Congress had full control of the CBO's results to the extent that the process could have been automated.  There really was no value added in having a CBO at all.  The CBO unwittingly became part of the coalition advocating for the new legislation.

What makes Jonathan Gruber’s statement about the stupidity of the American voter so noticeable is the idea that congress deliberately designed the legislation to deceive enough of the politically-active population to get the legislation passed.   This deliberate deception included a calculated manipulation of CBO to obtain results that confirmed the legislation would meet the financial and coverage goals.  The CBO ceased to be an accountable decision maker.  It missed its opportunity to call out this deception.

In earlier posts, I discussed the importance of a democratic government maintaining super-majority consent.   There will always be a small minority (or a few groups of small minorities) who object to the government and withhold their consent.  These minorities will agitate for reform or replacement of the government.   The government can survive these complaints as long as it has the super-majority support of people who not only don't object to being governed but actively support the government against the minority insurrections.   I distinguished super-majority from majority rule.  A super-majority includes the groups not in power but who agree to be ruled by the simple majority.   The simple majority makes the rules, and the super-majority agrees to follow the rules.

As we enter a new era with a much larger government that is governed by larger data sets, the population needs to adapt to stay informed.   I described this change in the model of citizenship as a dedodemocracy, a term I meant as government by data.   I suggested it is possible to have democratic participation in a dedodemocracy, but only if the people adopt a new approach to government.  In a dedodemocracy, the biennial elections become irrelevant.  To regain a role in participation, the population must become fluent in data science, including both how to use data tools to query data and how to interpret data for themselves.   I argued that we need to begin immediately to restructure our education system to prepare people for a data-driven government.   Our government increasingly will be driven by decisions based on data.  In order to continue to participate democratically in government, and to build super-majority consent to government, the population needs to become fluent in data: how to query it (including analytics and visualization) and how to interpret the results.   This goal of democratic participation requires that the government play fair in terms of sharing the data and clearly describing proposed policies so people can evaluate them on their own.

The offensive notion in Jonathan Gruber's remarks was that congress deliberately obfuscated the law to deceive the public.  To the extent that is true, it may be justified by the present-day notion of democratic government where participation is limited to voting and petitioning representatives.   If we approach government more as a dedodemocracy, this kind of deception must be forbidden.   People will need the ability to see the impact of the law for themselves.  They will need access to the data and the models.   They will also need to understand the policy so they can properly interpret the validity of the models and the appropriateness of the data.   A data-informed population participating in democratic government provides a non-coercive way to obtain super-majority consent.

The old model of accountable decision-making, involving trusted decision-makers free to make their own decisions, describes the old approach to obtaining super-majority consent in a non-coercive manner.   The necessary elimination of human accountability to enable purely evidence-based (data-driven) decision making will require a coercive approach to obtaining a super-majority by means of obligating the population to participate.  Evidence-based (or predictive-modeling-based) obligated decision making can provide the rationale for obligating the population to participate despite the lack of human accountability.   The immediate decisions may result in injury, but the injured population can take comfort in the fact that their suffering provides new data to obligate future evidence-based decision making.   This approach to government may also be facilitated through a data-informed public who have the data skills to follow how new data makes better policies.

Evidence-based government can be sustained through coercive means.   The ACA includes coercion through its mandates and taxes, and also through its deceptions, such as eliminating tax-deductible employer-provided health insurance coverage (the tax on so-called Cadillac plans).  The ACA succeeded by convincing enough people that its decisions were dictated by models judged to be accurate by impartial bodies such as the CBO.

I do not know much about the micro-simulation models such as the ones used for the ACA.   Given my motivation to follow this topic and my interest in data and simulation, I plan to learn more about them.   Many of my blog posts focused more on analytics of big data where the data includes large amounts of individual, objective observations.   Microsimulation appears to be a form of analytic tool (specifically, a simulation) for economics, and it uses economic data.  My impression is that the economic data available to the CBO and others is large but not nearly as large as the data sets we encounter in modern big data systems.   Economic data tends to be summarized survey results or aggregate economic data for specific demographic groups in specific geographic regions or affected by specific classes of industries.  The data fed to a microsimulation model is unlikely to be individual measurements of hundreds of millions of health-insurance subscribers or of trillions of health care transactions.   Instead, I suspect the microsimulation model uses as input data the results from other models for the various components of the population.   I agree that modeling and simulation is a key part of decision making, but they should use direct observations for input.
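For a rough picture of what I mean, here is a minimal sketch of a simulation driven by group-level aggregates rather than individual records.  The demographic groups, uninsured rates, and behavioral responses are invented; this is not the GMSIM or any actual CBO model.

```python
# Minimal sketch of a simulation over aggregated demographic groups.
# All groups, rates, and behavioral responses are invented for illustration.
GROUPS = [
    # (description, population in millions, uninsured rate, take-up given a subsidy)
    ("young adults, low income", 30, 0.35, 0.40),
    ("families, middle income",  80, 0.15, 0.55),
    ("near-retirees",            40, 0.12, 0.70),
]

def simulate_enrollment(subsidy_rate):
    """Project new enrollees (millions) from group-level aggregates, not individuals."""
    total = 0.0
    for description, population, uninsured_rate, take_up in GROUPS:
        uninsured = population * uninsured_rate
        # Behavioral response assumed proportional to the subsidy level.
        total += uninsured * take_up * subsidy_rate
    return total

print(f"Projected new enrollees: {simulate_enrollment(subsidy_rate=0.7):.1f} million")
```

The unit of computation here is an aggregate for each group, and those aggregates are typically themselves the outputs of surveys or other models rather than direct observations.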

Introducing model-generated data (including aggregations into preconceived categories) can bias the simulation to confirm our suspicions instead of informing us of what is really happening.   Model-generated data used as an input to simulation is what I call dark data.  Dark data is model-generated, but instead of directly supporting the decision-making the way a microsimulation's output does, it becomes an input to those simulations.   I am suspicious of dark data as an input to simulations.   It is likely to mislead us because it can only confirm our pre-existing theories.  Dark data provides no new information about the real world.
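A small sketch shows why.  If an upstream model generates the "data" that a downstream analysis consumes, the downstream analysis can only recover whatever assumption the upstream model baked in.  The elasticity value and both models below are invented for illustration.

```python
# Sketch of why model-generated ("dark") inputs can only echo their own assumptions.
# The elasticity and both models are invented; nothing here is a real economic model.
import numpy as np

ASSUMED_ELASTICITY = -0.4   # assumption baked into the upstream model

def upstream_model(prices):
    """Generates 'data' for downstream use from an assumption, not from observation."""
    return 100.0 * prices ** ASSUMED_ELASTICITY

prices = np.linspace(1.0, 5.0, 20)
dark_data = upstream_model(prices)       # model output posing as input data

# A downstream analysis "estimates" the elasticity from the dark data...
slope, _ = np.polyfit(np.log(prices), np.log(dark_data), 1)
print(f"estimated elasticity: {slope:.2f}")   # ...and recovers -0.40 exactly

# Whatever the real world's elasticity actually is, this pipeline can never discover it.
```

No matter how differently the real world behaves, the estimate will always come back as the assumed value.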

In an earlier post, I predicted that some day someone will bet everything on a predictive model and lose everything.  The loss will forever poison the concept of obedient following of evidence-based decision-making.   The decision will be something major, like passing the ACA.  Congress promoted the ACA as evidence-based, and this interpretation was approved by the CBO.   The ACA may provide that example of the decision that loses everything.
