Workforce diversity, particularly in STEM fields, continues to be a high priority. It was a big topic in the 1980s, and I think the statistics have improved since then, but the topic has not lost its priority and urgency. We want more diverse workforces, especially in professional (office-bound) jobs. We have still not reached the goal of office memberships whose diversity matches that of the population as a whole, and we will not be satisfied until every office group photo exhibits the right mix. In this view, team-member portraits provide a necessary and sufficient visualization of a team’s diversity, and too many teams do not have favorable visualizations.
There must be more to the problem than just visual diversity. Although there remains room for improving the visualization of diversity through photographs, the active workforce is more diverse than it was in the 1980s. Most businesses have inclusive hiring practices that encourage hiring decisions improving the photographic visualization of their internal diversity. Yet we remain at least as anxious about diversity as ever. I think the sense of a continued crisis of insufficient diversity comes from a sense that we are not yet experiencing the promised benefits of diversity: while teams are more diverse, we lack evidence that this diversity contributes any added value to the performance of the teams.
Even for highly successful diverse teams, there is little proof that the teams would not have been equally successful had they been purely homogeneous. The lack of an obvious return specifically tied to diversity leads us to worry that we have not yet achieved the level of diversity necessary to deliver value attributable to diversity. To get that return, we may need to strive for super-diversity, where workplace diversity exceeds the diversity of the population as a whole. In particular, one model of a diverse team is a team that has exactly one member of each possible category, where each category is a combination of traits such as race, sex, nationality, cultural group, and even sexual orientation. No team should have two members of any particular combination of characteristics.
I suspect this approach to super-diversity is counter-productive to team building for the simple reason that each person becomes aware of their representational role. They are on the team not to deliver value from their own private aptitudes, but instead to represent the value that somehow characterizes the category they belong to. The team becomes more like a legislature where each member feels that their primary responsibility is to represent their assigned category. He has to distinguish his views from those of the other members to demonstrate that his category is distinct, and he has to strive to keep his positions consistent with the overall population belonging to the same category. Obviously, it would be absurd to suggest that such diverse teams become as dysfunctional as legislative bodies, but I think something degenerates within a team when members recognize they owe their participation at least in part to their diversity category. This realization encourages each member to focus on representing his group instead of presenting his private skills and competencies.
More generally, I agree that modern teams do suffer from a lack of diversity. However, I think we are approaching diversity the wrong way. It is possible to have a diverse visualization through group photographs and yet have a very homogeneous team in terms of what is relevant to the team’s work. Homogeneity of thought is a major risk factor for the long-term success of the team, because a team that thinks alike ultimately fails to see problems early enough to solve them.
We need diverse teams, but the diversity must lie in each member’s private contributions. This is consistent with how diversity was presented to me in the 1980s. At that time, the value of photo-diversity was that the members brought personal experiences from diverse backgrounds. The photographic diversity was a proxy for a diversity of life experience. What really mattered (in theory at least) was this diversity of life experiences, cultural world-views, and visions of the future. We imagined that hiring people who looked different would yield people who thought differently. I don’t think it has worked out that way. Photo-diversity can come with experience homogeneity, and as a result our teams are not as effective as we had predicted.
Modern workplace culture strives for the ideals of non-discrimination and equal treatment for all workers. Where there are differences in compensation and authority, the differences must trace to objective measures such as certifications, accreditation, years of experience, and so on. Photo-diversity of teams reinforces the goals of equality by presenting the possibility that unequal treatment could be explained as prejudice based on the visibly apparent differences among the individuals.
Most people are eager to avoid even the appearance of discrimination in the workplace, and most will work hard to be equitable in their relationships with their peers. To do this, they must set aside personal observations of distinctions between individuals’ abilities, talents, attitudes, and so on. In any team, some people will be able to do certain types of tasks better than others, and some tasks are more valuable than others. However, a private subjective observation of these advantages is indistinguishable from discrimination based on visible characteristics, especially in a super-diverse team. If each visibly apparent characteristic has exactly one representative on the team, then any difference in how that member is treated can (and will) be interpreted as an act of discrimination based on that characteristic.
Diversity of visible characteristics (age, sex, skin pigmentation, culture, etc.) sets up a scenario that stifles any possibility of making personal judgments about the personally unique capacities of individuals. This is harmful in both directions: the team is unable to optimize the potential available within it, and the individual with unique capabilities is unable to realize his full potential. Even if everyone on the team privately recognizes that one individual is particularly well suited for a particular task, they must submit the task for group deliberation that distributes the workload equitably among the team. The imperative to set aside private observations of competence forces a distribution of work that prevents the excellence possible if the task were assigned exclusively to the one individual whose capabilities best suit it. In super-diverse teams, there is no easy way to distinguish illegitimate discrimination based on photographic characteristics from discrimination based on informal observation of competence.
The above scenario creates stress within the team, especially if the team fails to meet its objective while privately observing that success would have been possible had they had the courage to assign the task discriminately to the most capable member. That courage involves confronting the perception that there is no outside objective means to differentiate legitimate from illegitimate discrimination. In super-diverse teams, every individual is representative of a unique class of people, so discriminating based on an individual’s private capacities is indistinguishable from discriminating against the entire group that the individual’s photograph represents.
The bigger challenge for photographically-diverse teams is more subtle. As I attempted to discuss in my prior posts, successful human teams need to learn collectively in a way comparable to how machine learning algorithms learn. Machine learning involves discriminating between numeric or categorical features, which I suggested are comparable to the human members of a team. The learning process arises when we allow different weights to be assigned to each feature. The weights correspond to preferences, rewards, and penalties, and machine learning adjusts them based on observations of the relative importance of a particular feature to a particular desired outcome. I argue that successful agile teams succeed when they are allowed to behave comparably: the team learns by dynamically adjusting the network of relationships between members and the desired outcome. The team learns by discovering successful forms of discrimination.
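To make the analogy concrete, here is a minimal sketch of that weight-adjustment process. The data and features here are hypothetical, not from any real team: four features stand in for four "members," and gradient descent shifts weight toward the features that are actually observed to predict the outcome.

```python
import numpy as np

# Hypothetical illustration of the analogy above: each feature is a
# "team member", and learning means adjusting each feature's weight
# according to how much it contributes to the desired outcome.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 4
X = rng.normal(size=(n_samples, n_features))

# Only features 0 and 2 actually contribute to the outcome.
true_w = np.array([2.0, 0.0, -1.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)

# Gradient descent on squared error: weight drifts toward the
# features observed to matter, and away from the ones that do not.
w = np.zeros(n_features)
learning_rate = 0.01
for _ in range(1000):
    gradient = X.T @ (X @ w - y) / n_samples
    w -= learning_rate * gradient

print(w.round(2))  # weight concentrates on the informative features
```

Nothing here requires fairness among the features: the algorithm is free to reward feature 0, penalize feature 2, and ignore the rest entirely.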
At a higher level, data scientists practice an art of designing machine learning models by tuning the features to better match the objective. The data scientist’s art includes adding or removing features, or imposing a constraint on the total collection of weights (a process called regularization). Human teams can grow similarly over time by adding or removing members, or by constraining rewards and penalties to a defined budget (for example). The difference is that the data scientist faces no impediments in his choices of features and regularization. In particular, the data scientist may discover effective features that previously would have been assumed to be irrelevant. For example, for a process known to follow a linear relationship, the data scientist may find that the process is best modeled with a non-linear function that is allowed to curve at certain points. Because there is a limit to the accuracy of recovering the true linear relationship from regression, an unexpected non-linear relationship may result in better predictions overall. The data scientist is free to choose this non-linear feature even though it is contrary to well-established theory.
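As a sketch of that freedom, consider hypothetical data from a process theory says is linear but which in fact curves slightly. The modeler can simply add a speculative x² feature (and a ridge-style regularization penalty on the weights) and let the fit decide:

```python
import numpy as np

# Hypothetical data: theory says y is linear in x, but the true
# process has a mild curvature the theory does not predict.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=300)
y = 1.5 * x + 0.4 * x**2 + rng.normal(scale=0.2, size=300)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

feature_sets = {
    "linear only": x[:, None],               # the theory-approved feature
    "with x^2": np.column_stack([x, x**2]),  # plus the speculative feature
}

mse = {}
for name, X in feature_sets.items():
    w = ridge_fit(X, y, lam=1.0)
    mse[name] = float(np.mean((X @ w - y) ** 2))
    print(f"{name}: mse={mse[name]:.3f}")
# The non-linear feature set predicts better, theory notwithstanding.
```

No committee has to approve the x² feature; it earns its place solely by improving the fit.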
This process of feature selection is comparable to selecting and developing the members of human teams. Human teams are more hampered because their only opportunity to select new members comes when it is time to fill a vacancy. Even then, they are constrained to select credentialed candidates who match the requirements of that vacancy: the job opening has to be advertised with the specific requirements of the vacant role, and the candidate pool will need objective evidence that they match that specific role. The team does not have permission to experiment with the definition of the role to be filled.
I argue that if machine intelligence eventually outperforms human intelligence, part of the reason for that success will be that humans handicap themselves by denying themselves the flexibility to select and discriminate among their members (corresponding to features). In modern business practice, human teams must conform to prior expectations of the makeup of the team. In particular, modern teams do not have the opportunity to distribute the workload or rewards unequally toward the more capable members. Human teams do not have access to techniques comparable to sparse weighting or feature pruning.
Perhaps a better analogy in machine learning terms is that human teams must operate under heavy regularization terms that severely restrain the specific contributions of individual features (team members). Modern team management practices (including most agile practices) effectively impose a huge regularization term (lambda) on the cost function in order to achieve the goal of photographic diversity of the team membership. Personally, I assume that the sociological goals of equality and diversity are important, and may even be more important than the performance potential of teams. However, I note that these goals come at the cost of restraining the potential intelligence of human teams, especially when compared to machine intelligence unburdened by equality concerns about the features it selects or emphasizes.
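The heavy-lambda analogy can be shown directly. In the hypothetical ridge-regression sketch below (invented data, illustrative only), raising lambda shrinks every weight toward zero and flattens the distinct contributions of individual features, much as heavy process constraints flatten individual contributions on a team:

```python
import numpy as np

# Hypothetical data: three features with very different true contributions.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([3.0, 1.0, 0.2]) + rng.normal(scale=0.1, size=200)

# Ridge regression with increasing lambda: the penalty term lam*|w|^2
# in the cost function pulls all of the weights toward zero.
weights = {}
for lam in [0.0, 100.0, 10000.0]:
    weights[lam] = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print(f"lambda={lam:>7}: weights={weights[lam].round(2)}")
# At large lambda, the standout feature no longer stands out.
```

With lambda at zero the fit recovers the true contributions; with a huge lambda, even the most valuable feature is barely allowed to contribute.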
Team intelligence comes from interactions within the team, and the current intelligence of the team comes from its experiential diversity. It is possible to have photographically-diverse teams with homogeneous experiences. For example, a new project may hire exclusively from one graduating class of an exclusive university and achieve the desired visible diversity while obtaining almost no diversity of experience. I argue that such teams have relatively lower intelligence than teams drawn from different schools and different graduating years. What matters is the diversity of personal experiences of the individual team members, not the experiences of the broader populations those members represent. Even if the social-group experiences were relevant, the individual will have access to only a tiny fraction of the experiences we expect him to represent. The genome itself is not a communication channel for sharing intelligence among living humans.
Our emphasis on diversity implicitly pressures us to avoid any appearance of discrimination within teams. Because the teams are photographically diverse, discrimination based on value offered is indistinguishable from discrimination based on the group represented by the individual. This is especially true at the informal level of relationships inside the core team. Discrimination must instead be justified by non-controversial certifications, degrees, and the like.
In practice, the only opportunity for discriminating based on value offered is the one-time hiring process. The ideal hiring process considers objective documentation of past performance to admit the desired capabilities. When the hiring process is completed, the opportunity to discriminate on capability effectively ceases: the newly hired must be treated equitably out of respect for the group the individual represents. Hiring managers replaced the earlier notion of firing managers; we now permit discriminating among talents only in the brief window of making a hiring decision.
The analogy of machine learning presents a lesson that collective intelligence can emerge from the dynamic application of discrimination between features when confronting new problems. Machine learning’s recent successes in outperforming human capabilities should alert us that it is following a superior model of team learning. Machines are not hampered by a need to preserve or respect superficial diversity; on the contrary, they excel by exploiting their freedom to discriminate between features. If machines become smarter than humans, one reason will be the handicap we imposed on ourselves: we do not permit our teams to discriminate.
The discrimination being prevented is the kind that could occur within a team during a particular task. The team approaches the task’s objective under the constraint of avoiding the appearance of inequitable treatment of the team members. The ideal team is one where the individual contributors are indistinguishable in terms of value added toward the objective. For example, one of the lessons of queue optimization is to deliberately hold back high-performing members in order to facilitate faster delivery of value. In an earlier post, I described the “penny game” demonstration, which illustrated that the job got done faster by treating all of the team members the same. Implicit in that demonstration was tolerance for the poor performance that allowed queues to build. When the process encountered a slow performer, there was no option to discriminate by penalizing the poor performer or by providing incentives for the higher performers. We force our teams to work with the membership as it exists, handicapping the high performers for the sake of equalizing the team to the slower performers. Machine intelligence algorithms have no comparable constraint: they can penalize unhelpful features, or even eliminate those features entirely.
The missed opportunity for modern teams is the prevention of an emergent team intelligence based on the discriminate allocation of rewards and penalties to team members according to the relevance of their capabilities. In practice, this may require sidelining certain individuals who are not performing well, which in turn may mean assigning more workload to the higher-performing individuals. In the machine learning analogy, some features will end up with high positive weights and others with weights that are low or negative. At the end of the training period (or agile sprint), it is obvious that some features (or persons) provided more valuable contributions than others. Over multiple such periods, an effective learning strategy may prune out the less relevant features (or persons).
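In machine learning terms, that pruning step looks like the hypothetical sketch below (invented data, purely illustrative): fit weights over one "training period," then drop the features whose learned weights stayed near zero.

```python
import numpy as np

# Hypothetical training period: six features, half of which are noise.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=300)

# A least-squares fit plays the role of one training period (or sprint).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prune: keep only the features whose weight is meaningfully non-zero.
keep = np.abs(w) > 0.25
print("kept features:", np.flatnonzero(keep))  # the informative ones survive
```

The algorithm applies this cut without hesitation; a human team rarely has license to do the equivalent with its members.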
Humans are not mathematical features. Human team members can observe differences in the allocation of rewards and penalties, and one response to these perceived differences can be the motivation to adapt one’s value offering to be more like the ones being rewarded and less like the ones being penalized. This happens naturally when the rewards and penalties are visible. The natural learning process is teaching by example, which we consider essential to educating children and which machine learning uses as well. Team learning can work the same way. The advantage humans have over machines is that we can provide explicit feedback of rewards and penalties to encourage adaptation of the members.
Effective teams do draw on the diversity within the team. However, that diversity emerges from the experience of repeated attempts at reaching some goal. Such diversity cannot be engineered by selective hiring processes; instead, it emerges as the assembled team tackles new problems. Ironically, in our modern hyper-sensitive culture, this successful emergent or learned diversity is more likely to occur in teams that start off blind to any differences between the members. This happens with teams that are photographically homogeneous, or, more ideally, with teams that are blind to their photographically-apparent differences. The modern conceit emphasizes and exaggerates superficial diversity, and this results in the suppression of judging individual performance relative to the goal in order to avoid the appearance of represented-group discrimination. This excessive sensitivity to superficial differences prevents us from building effective teams.
Our excess sensitivity to the equitable treatment of physically identifiable groups is creating the perfect conditions to invite machines to take over our jobs. We will prefer to assign to machines those tasks where effective performance requires learned discrimination based on performance.