There’s a philosophical concept called the Great Man Theory that suggests history is driven by significant individuals who act as centers of gravity for society as a whole: think Alexander the Great, Napoleon Bonaparte, Queen Elizabeth I, or the founding fathers of the American Revolution.

Recent research suggests that cybersecurity and related professions are developing a “Great Machine” problem of sorts: a belief that artificial intelligence has the power to replace human agency. This belief threatens to minimize the impact of training, institutional culture, and other factors that have traditionally helped workforces navigate disruptive innovation.

The findings are worrisome. A willingness to accept and inject grand, singular assumptions about emerging technologies into operational decision-making clearly encourages ignorance of technological nuance. The result is a serious challenge for cybersecurity practice, one that requires more sophisticated solutions than simply raising awareness of AI and building out training opportunities.

Fortunately, having such a singular problem opens the door to potential solutions, many of which CISOs should consider sooner rather than later to mitigate the downstream effects of a workforce already split between seeing AI as a replacement and seeing it as merely augmentative.

The Great Man idea holds that leaders with singular moral and intellectual qualities drive historical outcomes and define hallmark events like great battles or political transformations. The theory has serious implications that clash with modern psychological and business thinking about how people behave in organizations both big and small. Philosophical qualities like individual nobility or virtue become as meaningful as broad social forces in explaining the propensity for massive change. The timing of major shocks to society, such as pandemics, war, or economic downturns, is meaningful only insofar as it informs how leaders act. And other social trends, like the emergence of novel technologies or nationalism, are seen more as themes that define moments than as drivers of gradual transformation.

While it might be tempting to joke that the Great Man Theory still dominates some executive hiring today, the reality is that it is a tired trope in the 21st century. For more than 100 years, experts of all kinds have referenced the idea of leaders as singular centers of historical gravity only as a straw man, something easy to critique and for which there are alternatives with obviously greater analytic value.
AI has become a proxy for the Great Man
That being said, the idea of monolithic agents of societal change remains surprisingly present in both popular and professional thinking. Specifically, technology is often given singular attributes that explain the world of yesterday and tomorrow.

Television, for example, transformed the shape of war in the 20th century by bringing conflict directly into the home. And the internet is often offered as an explain-all for everything from pro-democracy movements in the developing world to the average citizen’s rising exposure to conspiracy theories. In both cases, of course, the oversimplification is almost as extreme as the idea of great men and women defining history.

In some ways, it perhaps shouldn’t be surprising that the view of technology as monolithic, what we might think of as a Great Machine Theory of history, persists in both professional and public discourse. Disruptive technologies like the telegraph, the computer, or the internet are complex assemblages of national and global infrastructure, which invites simplification because functional details feel inaccessible to anyone who isn’t an expert. Complex technologies can mean different things to different people in a way that is much harder to sustain with leaders, particularly in the modern age of broad information accessibility.

AI is often cast in exactly this way. True, it feels curious that uniquely dynamic technologies like the internet or, increasingly, AI are tied to such simplification. After all, these things are anything but one-dimensional. Genuine complexity makes it harder to call something like AI, which could mean algorithms, consumer products, robots, and more, a singular thing. Indeed, this logic often prompts planners and security decision-makers to avoid simple assumptions about AI. But recent research clearly shows that basic underlying ideas about what AI might become have a substantial impact on what is prudent and what is possible in practice.
It matters whether cyber pros see AI as replacement or augmentation
Recent experimental research that directly surveys professionals in cybersecurity and related areas of homeland security practice tells us that how stakeholders perceive AI as a transformative phenomenon matters.

In a series of studies, researchers assessed how the level of AI involved in a crisis situation (specifically, a national cybersecurity crisis with severe potential implications for national election infrastructure) affected how decision-makers reacted to developments.

The initial results reflected what we’ve known about crisis behavior in cybersecurity for a while: the more novel an incident, the more unpredictable stakeholder reactions will be. This, however, is mediated by experience and levels of training. Good news for CISOs so far.

However, the impact of these mediating effects disappears for decision-makers who see AI as likely to replace extensive elements of their profession, compared with those who see it as augmentative or as likely to replace only limited professional functions.

Oversimplification around AI, in short, neutralizes the traditional advantages wielded by organizations that see novel threats in their future (i.e., hiring seasoned professionals with diverse career experiences and establishing better training for all).
What CISOs can do to avoid oversimplified thinking around AI
While the research may provide evidence of the Great Machine problem within professional security practice, the good news is that there are many potential solutions.
Diversity in AI training
One clear solution to the problem of technology oversimplification is to tailor AI training and educational initiatives toward diverse endpoints. Research clearly demonstrates that know-how about the underlying functions of security professions has a real mediating effect on the excesses of encountering disruptive, unfamiliar conditions. The fact that the oversimplification mentality undercuts this effect, unfortunately, suggests that more is required.

Specifically, discussion of the foundational functionality of AI systems needs to be married to as many diverse outcomes as possible to emphasize the dynamism of the technology. AI education and training must emphasize the variability of outcomes based on social, political, commercial, and security decision inputs. Cybersecurity employees must be guided as much as possible toward understanding the path-dependent effects of variables such as differences in the data used for training, bias in the interfaces used to consume and annotate incoming information, and more.

A particular opportunity here would be the establishment of penetration testing requirements that engage a cross-section of the workforces adopting new AI tools. In other words, new platforms or systems must be tested by representative cross-samples of the security populations that might use them, which requires adopters or developers to offer accessibility testing options to users at the lowest possible skillset denominator.
Temporality matters
As has been written about elsewhere, it would also be wise for cybersecurity professionals and their organizations to explicitly support the deployment of attritable systems. This means that AI systems in this early era of development and deployment should be functionally designed to fail, not via systems failure but through obsolescence on a relatively short timeframe of a few years.

Doing so would help keep the timeframes of feasible use clear in the minds of users and should also help insulate organizations from the pathological excesses of runaway oversimplification.
Comparative thinking is nuanced thinking
Naturally, one of the value propositions of studies like the one presented here is the ability for professionals to see the world as another kind of professional might. While tabletop exercises are already a core tool of the cybersecurity profession, there are opportunities to incorporate comparative, applied learning about AI using simple simulations.

In other words, professionals should be encouraged to put themselves in the shoes of others given access to the same tools they are. Of greatest utility for overcoming the Great Machine problem is a simulation built on a most-similar approach to comparison: asking individuals to do the job they already have under distinctly different conditions (altered cultural parameters, national conditions, climate circumstances, and so on) and to assess their AI tools in that context.
Stagnation is the enemy of workforce health
Finally, wherever possible, role rotation is a clear advantage in overcoming the issues illustrated here. In testing, the diversity of career roles, over and above career length, played a similar role in mitigating the impact of novel conditions on response priorities.

Cross-pollination of ideas about the socialized role of different kinds of AI tools should go far toward safeguarding against the issues caused by technological oversimplification, in addition to expanding technological know-how as a form of cultural competency.