In my last post, I outlined an approach to reforming federal government agencies by requiring extensive record keeping of the actual activities performed by each government staff member during assigned duty hours. I argued that this additional book-keeping burden is made practical by recent productivity improvements in information technology, and that it is comparable to the intrusive regulation the federal government has imposed on many industries.
I gave the example of the mandate on healthcare to use the very detailed ICD-10 coding system to describe diagnoses and patient intakes. My recommendation was to use a similar coding system for all government-staff activities, but with the diagnosis replaced by similarly precise distinctions of work products. The goal of the coding would be to allow data queries such as subpoenas and FOIA requests to be automated by code, so that the provided data would be every relevant piece of information and nothing but relevant information. Diligent application of precise codes to break work products into indivisible units will permit automated queries without the need for human review and redaction of irrelevant or sensitive information.
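To make this concrete, here is a minimal sketch in Python of what such a record and an automated query might look like. The two-part code structure, the field names, and the example codes (such as "EML.DRAFT") are assumptions invented for illustration, not a proposal for the actual coding standard.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ActivityRecord:
    """One coded unit of work, loosely modeled on the two-part idea of ICD-10:
    an activity code (what kind of work product) plus a reason code (what made
    the work necessary)."""
    staff_id: str          # pseudonymous identifier for the staff member
    start: datetime        # start of the coded 15-minute interval
    activity_code: str     # e.g. "EML.DRAFT" for drafting an email (illustrative)
    reason_code: str       # e.g. "RULE.CO2.2014" tying the work to a proceeding
    work_product_id: str   # pointer to the indivisible unit of work

def automated_foia_query(records: List[ActivityRecord],
                         reason_prefix: str) -> List[ActivityRecord]:
    """Return every record whose reason code falls within the requested scope,
    and nothing else -- no human review or redaction step required."""
    return [r for r in records if r.reason_code.startswith(reason_prefix)]
```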
For quality-control reasons, we should demand that each staff member account for each time unit (such as each 15 minutes) with one or more codes for the work products performed during that period. This will provide the added benefit of evaluating performance across many staff and over time. It can also help address performance problems by relating compensation to actual work performed.
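A minimal sketch of that quality-control check follows, assuming the hypothetical record layout above: it simply lists every 15-minute interval of a duty day that has no code recorded against it.

```python
from datetime import datetime, timedelta

def uncoded_intervals(coded_starts, duty_start, duty_end, unit=timedelta(minutes=15)):
    """List every time unit of the duty day that carries no activity code."""
    coded = set(coded_starts)
    gaps, t = [], duty_start
    while t < duty_end:
        if t not in coded:
            gaps.append(t)
        t += unit
    return gaps

# Example: an 08:00-17:00 duty day with one missing interval flagged at 10:30.
day = datetime(2014, 11, 24)
coded = [day.replace(hour=8) + timedelta(minutes=15 * i) for i in range(36) if i != 10]
print(uncoded_intervals(coded, day.replace(hour=8), day.replace(hour=17)))
```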
The goal is to collect precise data that can support public querying. The queries may be restricted to aggregated or categorized data, but the data will still be rich enough to allow anyone to explore government operations extensively.
My suggestion does not result in any reform to any mission of any part of government. With this proposed reform, every part of government will continue to do what it has been doing. All this reform asks is that they more diligently document exactly what each staff member (civil service or contractor) is working on for every 15 minutes (for example) of the duty day. With this requirement in place, we will still expect the same work from the agency as before. The requirement merely adds the burden of documenting the kind of work being performed during the workday.
In some ways, this is a chance to impose on government the same kind of tight regulation of duties that the government imposes on industries. Based on the experience of non-government industries, the government should be able to adapt to the new challenges through improvements in technology, staff training, and demands on staff performance. The lesson from industry is that this additional workload is a more useful way to take advantage of productivity improvements from technology; the alternative is to allow staff more idle time or to cut staff positions.
The intention of this new regulation is to produce a new good on top of the regular work product: structured and precise data about individual work products. This data can support analytics tools accessible to the public for observing the workings of government. The precise coding can also allow retrieval of specific work products through FOIA or subpoena requests, where the processing of the requests can be automated without the need for human review or redaction.
The fact remains that this reform does not change anything about what the agency actually does. Each agency will continue to do whatever it has been doing. All that will change is that it will produce more data about how it goes about doing its job.
In the last post, I described the impenetrability of reforming NSA practices. This demand that they extensively code their staff's daily output of work products does not change their mission in any way. They will still do whatever they are doing, whether we agree or disagree. Also, the newly collected public data about their daily workload will describe aggregate categories of work products: for example, distinguishing writing emails from performing an investigation. The data will tell us about the creation of the email and the category of purpose for the email. This new data will not inform us of the specific content, although the category may allow for efficient retrieval of content when it is permitted to be released through a FOIA query.
I suggest that this regulation of coding each labor activity will result in real benefits.
The benefits I have already mentioned are better public visibility into the nature of tasks performed by each staff member and faster public access via FOIA requests. In the case of visibility into staff workload, it will be informative to know what tasks are performed within a particular job category. For example, we already know that an agency has people assigned as computer programmer specialists. The additional information will tell us how these specialists spend their days: performing computer-science related tasks, attending meetings, performing mandatory training, or idly browsing the Internet. All of these activities will be coded. Quality control involves the demand that codes be applied for every time period of the day. Periodic analytics will evaluate whether the time spent on a particular activity is comparable to the norms collected over time and across the population of peers.
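As a sketch of that kind of peer-norm check, assuming we can already total each staff member's coded hours per activity (the identifiers and hours below are made up), a simple z-score comparison is enough to surface someone whose time on an activity departs from the peer population:

```python
from statistics import mean, stdev

def flag_outliers(hours_by_staff, z_threshold=2.0):
    """Flag staff whose hours on one activity sit far outside the peer norm."""
    values = list(hours_by_staff.values())
    mu, sigma = mean(values), stdev(values)
    flags = {}
    for staff, hours in hours_by_staff.items():
        z = (hours - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flags[staff] = round(z, 2)
    return flags

# Illustrative weekly hours coded to "mandatory training" by peer programmers.
weekly_training_hours = {"A17": 2.0, "B42": 2.5, "C08": 1.75, "D91": 12.0,
                         "E33": 2.25, "F05": 2.0, "G77": 1.5}
print(flag_outliers(weekly_training_hours))  # D91 stands out from the peer norm
```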
The larger benefits of this record of work products will be similar to the benefits of regulation on industry. The burden of precisely accounting for each activity by assigning precise codes will require the staff to think more seriously about the task being performed. The coding requires thought to choose the best code (or multiple codes) to capture the task performed. This thinking will also alert the staff that the record will be used to evaluate how consistent their productivity on the task is with their peers and with history. The requirement to perform this coding will force the staff to take their tasks more seriously.
I think about last year's reports of NSA analysts abusing agency resources to snoop on their love interests (the so-called "loveint" cases). My suggested regulation does nothing to prevent this from occurring. The coding does not expose what the staff member is snooping on. All the regulation requires is that the analyst select the correct code for every activity performed during working hours. Because incorrect coding would be a basis for disciplinary action even without a finding of wrongdoing, this requirement will discourage abuse or fraud.
In my last post, I proposed that the coding be similar in nature to ICD-10 codes in that the code describes not only the nature of the work, but also what external condition made that work necessary and whether the task was an initial task or a follow-up to an earlier task. Even without any information about the actual work being performed within NSA, we could observe anomalies by comparing a department's reported assignment of new tasks with the sum of its staff members' recordings of starting new tasks. The sum of the staff's codes for new tasks should match the number of new tasks that the department actually handled. Attempts to obscure the nature of a task could also be detected when the summary data fail to match up.
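A reconciliation check of that kind is trivial once the codes exist. The sketch below assumes a department-reported count and per-staff counts of "initial task" codes; the numbers and identifiers are invented for illustration:

```python
def reconcile_new_tasks(department_reported, staff_new_task_counts):
    """Compare a department's reported count of newly assigned tasks with the
    sum of its staff's 'initial task' codes; a mismatch is an anomaly worth review."""
    coded_total = sum(staff_new_task_counts.values())
    return {
        "reported": department_reported,
        "coded": coded_total,
        "discrepancy": department_reported - coded_total,
        "consistent": department_reported == coded_total,
    }

# Illustrative monthly check for one department.
print(reconcile_new_tasks(140, {"A17": 46, "B42": 51, "C08": 38}))
# {'reported': 140, 'coded': 135, 'discrepancy': 5, 'consistent': False}
```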
In this concept of coding work products, there would be a separate NSA job code for "loveint" queries that must be used for this type of activity. Obviously, no one is going to use that code, but because the activity is possible its code must exist. (Note that ICD-10 has plenty of outlandish codes, too.) If an NSA staff member performs a "loveint" task, the attempt to hide the intention behind another code would require a conscious decision to commit fraud. The regulation would mandate job termination for such fraud, comparable to the penalties experienced in industry for fraudulent record keeping. The requirement to code all work-product activities may provide the strong disincentive for wrongdoing that we would otherwise try to codify explicitly in new congressional legislation.
The detailed coding of all staff work-products produces the data to allow NSA (and external investigators appointed by congress) to discover the anomalies that suggest improper use of the coding.
When I first heard of the "loveint" abuses, I felt this was an announcement by NSA to demonstrate that it does monitor and discipline its staff for abuses. I felt this example was a distraction because the abuse is so readily recognized by the population as something that can happen. The real complaints about possible abuses are far more troubling. In my last post, I mentioned the failure of the Senate to advance legislation that would make fundamental reforms to NSA's mission. Those reforms were meant to restrain the agency on matters far more important than individuals snooping on their love interests.
The lesson from the failed attempt at legislation is that it is very difficult for congress to do anything to change an agency's mission after the agency is created. We have very little (if any) democratic control through congress to redirect the agencies. Congress is as powerless to control the day-to-day business of federal agencies as it is to dictate through legislation the day-to-day business of industries (such as finance and healthcare).
In general terms, congress regulates industry primarily by imposing additional burdens for documentation and procedures on top of whatever the industries do naturally. The documentation requirements are enforced with legal penalties for fraud or neglect. I am suggesting a similar approach to regulate all federal agencies. This approach allows the agencies to do what they do naturally, but it demands that they document more carefully the work they are performing and the justification for performing it.
The documentation of how they are using their resources will permit congress and the public to have greater visibility into what is going on in an agency. This information can help satisfy the public's concern that proper procedures were followed in the agency's evidence-based decision making. It may also help the public anticipate those decisions, or perhaps learn how to influence them. The data available to the public fits my concept of dedodemocracy, which obtains non-coerced super-majority consent for otherwise unaccountable decision making by allowing the people to observe the data that went into each decision.
Democratic participation in government can be restored in part by the government providing more information about what its staff are working on each day. Although the agency's decisions may be strongly evidence-based and thus outside the control of the democratic process, greater visibility into the agency's process for reaching a decision can help convince the public that the decision followed good practices. To illustrate, allow me to imagine a scenario where this data collection were available for the soon-to-be-approved EPA rules on CO2 emission standards for coal-fired power plants.
This ruling is based on an interpretation of the Clean Air Act of 1970 that treats CO2 as something the agency can regulate. Although this interpretation is controversial, the Supreme Court has upheld it.
For this post, I want to consider the credibility of the agency's process for introducing these new rules. The rules have progressed using the old model of publishing the proposed rules, allowing time for public responses, and considering those responses for a final ruling. What is missing from this process is any visibility into the internal processes involved in constructing the rules, evaluating the responses, or revising the rules.
The proposed regulation would require all government staff to record their daily work efforts using codes specific to the individual tasks they perform each day. In particular, all of the staff in the EPA would record their daily efforts. As noted above, part of the code describes the work product (attending meetings, conference calls, e-mails, statistical analysis, etc.) and another part of the code describes the reason for the effort. In this case, the second part of the code would include information identifying the work as being related to the proposed ruling for CO2 emissions. This second part of the code would probably allow other justifications as well, or those other justifications would require applying multiple codes to the same work product (again consistent with ICD-10 practice in healthcare).
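As a sketch of how such records could be queried for this one rulemaking, assuming simple dictionary records where each work product can carry several reason codes (the code values, such as "RULE-CO2-111D", and the field names are made up for illustration):

```python
from collections import Counter

def effort_by_activity(records, reason_code):
    """Tally coded intervals related to one rulemaking, broken out by
    work-product type and staff role (individual identity stays protected)."""
    tally = Counter()
    for r in records:
        if reason_code in r["reason_codes"]:   # a record may carry several codes
            tally[(r["activity_code"], r["role"])] += 1
    return tally

# Illustrative records; "RULE-CO2-111D" is an invented identifier for the CO2 rulemaking.
records = [
    {"activity_code": "MTG.REVIEW", "role": "GS-13 analyst",      "reason_codes": {"RULE-CO2-111D"}},
    {"activity_code": "EML.DRAFT",  "role": "GS-13 analyst",      "reason_codes": {"RULE-CO2-111D", "ADMIN"}},
    {"activity_code": "STAT.MODEL", "role": "GS-14 statistician", "reason_codes": {"RULE-CO2-111D"}},
]
print(effort_by_activity(records, "RULE-CO2-111D"))
```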
With this information in place, we will have a data record of what everyone throughout the agency has been working on, so that we can observe all of the efforts related to this particular rule. As noted, this record does not expose the actual content of the work product (such as what was discussed in an email), but it does capture the fact that the work product involved a certain action (such as constructing an email) and that this action was in support of the effort to advance this rule.
With this data trail we have new visibility into the internal work processes within EPA. The data trail does not expose the actual internal deliberations, although that information may be available through a more specific request such as a FOIA request. For this post, I am considering only the information about the labor spent on a particular kind of work product related to the new ruling. This information would tell us that the work product was performed at a specific time, required a certain amount of time, and was performed by a specific individual. For data-analysis purposes, the identity of the individual may be protected, but we should be able to know their role and grade level.
When the ruling comes out, it is inevitable that there will be some concern about whether the ruling was fairly arrived at. In particular, this ruling will impose very significant costs on coal-fired electric power plants and may result in some of them being closed down. This may consequently raise power bills substantially for large populations. This population will demand assurance that the rule came from a fair application of reasonable practices for evidence-based decision making.
The data trail will make possible analytics that can reinforce confidence in a fair rule-making process. For example, the codes will be used at various times and frequencies by various individuals within the agency. This time-and-intensity pattern of codes may be compared with historical data from similar rule making to show that there was a comparable level of diligence at each stage of the process.
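A sketch of that comparison, assuming we can total the coded intervals for each week of the two rulemakings (the weekly counts below are invented), is simply to look for the stage where the effort profile deviates most from the historical norm:

```python
def diligence_gap(current_profile, historic_profile):
    """Compare week-by-week counts of coded intervals for the new rule against a
    baseline from a comparable past rulemaking; return the week with the largest
    relative shortfall or excess of effort."""
    weeks = min(len(current_profile), len(historic_profile))
    gaps = [(abs(current_profile[w] - historic_profile[w]) / max(historic_profile[w], 1), w)
            for w in range(weeks)]
    worst_gap, worst_week = max(gaps)
    return worst_week, round(worst_gap, 2)

# Illustrative weekly interval counts for the new rule vs. a past, less contested rule.
print(diligence_gap([120, 340, 310, 100], [110, 300, 320, 400]))  # -> (3, 0.75)
```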
For example, one work product may be an internal review meeting. For a reasonable process, this work product should be preceded by multiple work products showing each individual preparing for the meeting. We can question whether particular participants were properly prepared if we observe a deficit of preparatory work products compared with comparable review meetings. Alternatively, we may question whether the meeting was biased if we observe an unusually large investment of preparation for a particular component of the ruling.
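That preparation check could look something like the sketch below, which counts each participant's preparatory intervals in a window before the meeting and flags anyone below an expected norm. The window size, the norm, and the timestamps are all assumptions for illustration; in practice the norm would come from comparable review meetings:

```python
from datetime import datetime, timedelta

def preparation_deficit(meeting_time, participants, prep_records,
                        window=timedelta(days=3), expected_intervals=8):
    """Count preparatory work-product intervals coded before the meeting and
    flag participants whose counts fall below the expected norm."""
    counts = {p: 0 for p in participants}
    for staff, timestamp in prep_records:
        if staff in counts and meeting_time - window <= timestamp < meeting_time:
            counts[staff] += 1
    return {p: n for p, n in counts.items() if n < expected_intervals}

meeting = datetime(2014, 6, 2, 10, 0)
prep = ([("A17", datetime(2014, 5, 30, 14, 0))] * 9 +
        [("B42", datetime(2014, 6, 1, 9, 0))] * 2)
print(preparation_deficit(meeting, ["A17", "B42"], prep))  # {'B42': 2}
```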
With this review of the detailed pattern of all of the work products, the public can gain confidence that the rule making was fair if that pattern matches other rules that are less controversial, or at least less controversial to this particular group of the public.
Another way to use this coded data is to compare the timing of work products with other news events such as political debates or campaign announcements. Although the codes do not reveal any content of the work products, their coincidental timing (such as a burst of phone calls coinciding with a particular presidential campaign speech) may suggest a suspicious collaboration with the executive office.
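A simple version of that timing check, assuming only timestamps of phone-call work products and of the external event (the values and thresholds below are invented for illustration), would count calls in a window around the event and compare against a baseline rate:

```python
from datetime import datetime, timedelta

def call_burst_near_event(call_times, event_time, window=timedelta(hours=2),
                          baseline_rate_per_hour=1.5, burst_factor=3.0):
    """Flag an unusual burst of phone-call work products around an external event."""
    in_window = [t for t in call_times if abs(t - event_time) <= window]
    hours = 2 * window.total_seconds() / 3600
    expected = baseline_rate_per_hour * hours
    return {"observed": len(in_window), "expected": expected,
            "burst": len(in_window) > burst_factor * expected}

# Illustrative: a dense run of calls starting at the time of a campaign speech.
speech = datetime(2014, 9, 15, 13, 0)
calls = [speech + timedelta(minutes=5 * i) for i in range(30)]
print(call_burst_near_event(calls, speech))
```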
To obtain this kind of visibility, the first thing we need is to begin diligently and precisely coding individual work products for everyone working in government. This data will make the secondary analytics possible. The above examples of analysis require sophisticated data-science skills to perform the analytics, interpret the results, and verify the statistical tests on the observed patterns in order to judge whether the ruling was fair.
In my view of a future dedodemocracy, I imagine that these data-science skills will be taught throughout grade school and become as common as we currently expect citizens to be able to read, write, and perform mathematics. This kind of data science will be an essential skill for participating in their dedodemocracy. The combination of this common data-science competency among citizens plus the requirement for all government staff to code all of their daily work products will permit the population to conclude that the government is using acceptable diligence and fairness in coming up with new obligatory evidence-based decisions.
Being able to observe the inner workings of government can reassure the population that the government is behaving reasonably, and so earn their non-coerced consent to be governed.
Yesterday I found this site, which presents in a single portal the public's usage of a wide variety of government websites. The list of sites included is impressive but not yet exhaustive.
However, the site shows near-real-time and historical totals for public use of a wide variety of sites. This is a good step toward providing a consolidated view that is easily presented to the general public.
In the above discussion, I described the need for more visibility into how government bureaucrats are spending their time. This existing website could serve as a prototype for future implementations that expose the web usage of government staff and contractors. In particular, I would suggest two additional implementations of an analytics capability: one covering government-staff use of public-facing government sites, and one covering internal, government-only sites.
For both implementations, the reports would show the name and domain of the sites along with near-real-time and historical totals (and trends) for usage. The difference from the existing portal is that the users tracked would be specifically .gov clients, presumably performing their job duties. An additional level of detail should aggregate the traffic by the part of the government hosting the clients.
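A tiny sketch of that aggregation step, assuming a visit log with client-domain, client-agency, and site fields (these field names are assumptions for the illustration, not the schema of the existing portal):

```python
from collections import defaultdict

def traffic_by_agency(visits):
    """Aggregate site visits from .gov client networks by the hosting agency."""
    totals = defaultdict(lambda: defaultdict(int))
    for v in visits:
        if v["client_domain"].endswith(".gov"):    # only government clients
            totals[v["client_agency"]][v["site"]] += 1
    return {agency: dict(sites) for agency, sites in totals.items()}

# Illustrative log entries.
visits = [
    {"client_domain": "epa.gov", "client_agency": "EPA", "site": "regulations.gov"},
    {"client_domain": "epa.gov", "client_agency": "EPA", "site": "weather.com"},
    {"client_domain": "example.com", "client_agency": "n/a", "site": "regulations.gov"},
]
print(traffic_by_agency(visits))  # {'EPA': {'regulations.gov': 1, 'weather.com': 1}}
```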
While not as comprehensive as the activity trackers I describe above, federal workers devote a significant portion of their daily duty time to internal and external websites, and public visibility of this usage would be very valuable for improving the public's participation in government.
The government can already see the popularity of public-facing government sites; there should be a complementary capability exposing analytics for internal, government-only websites.
Addendum: a write-up is available.
This is outstanding given the scope of the final product.