Systems Disruption: Exploiting Interconnected Vulnerabilities

Continuing with John Robb's "Brave New War," which introduced a prescient concept that has become increasingly relevant in our interconnected world: systems disruption. Robb argued that in modern conflicts, targeting critical infrastructure could cause widespread chaos and undermine a state's ability to maintain control. This strategy, which focuses on exploiting vulnerabilities in complex systems rather than engaging in direct combat, has proven remarkably accurate in predicting the nature of modern threats.
The core of Robb's insight lies in recognizing the inherent fragility of our interconnected systems. As societies have become more technologically advanced, they've also become more dependent on complex networks of infrastructure, from power grids and water supplies to financial systems and communication networks. While these interconnected systems offer tremendous benefits in terms of efficiency and capability, they also present a significant vulnerability. A well-placed attack (or a bad update, in the case of CrowdStrike) on a critical node can have cascading effects, causing disruptions far beyond the initial point of impact.

Robb's concept goes beyond traditional notions of sabotage or infrastructure attacks. He recognized that in a world of complex, interdependent systems, relatively small actions could have disproportionately large effects. This asymmetry is particularly appealing to non-state actors or smaller powers who lack the resources for conventional military confrontations. By targeting key vulnerabilities in critical systems, these actors can potentially cause widespread disruption and chaos without the need for large-scale military operations.

The psychological aspect of systems disruption is another key element of Robb's analysis. He understood that beyond the immediate physical or economic impacts, successful attacks on critical infrastructure could erode public confidence in the state's ability to provide basic services and security. This loss of confidence can be as damaging as the physical disruption itself, potentially leading to social unrest or political instability.

Robb's foresight in identifying the potential for systems disruption has been validated repeatedly in recent years. Cyber attacks on power grids, ransomware targeting healthcare systems, and disruptions to financial networks have all demonstrated the vulnerability of our interconnected world; the Colonial Pipeline ransomware incident is a good example of this.
These incidents have shown that Robb's concept of systems disruption is not just a theoretical construct, but a real and present danger in modern conflicts. As our reliance on technology continues to grow, so too does the potential for systems disruption. The rise of the Internet of Things, the increasing digitization of critical infrastructure, and the growing sophistication of cyber weapons all point to a future where the risks of systems disruption are likely to increase rather than diminish.

In response to these threats, Robb emphasized the need for resilience in our critical systems. This means not just hardening defenses, but also building redundancy and adaptability into our infrastructure. The goal is to create systems that can withstand attacks or disruptions and quickly recover, rather than cascading into widespread failure.

The CrowdStrike incident of July 19, 2024 serves as a good illustration of the risks inherent in centralized systems, particularly as they relate to cybersecurity. CrowdStrike, a major player in the cybersecurity industry, experienced a critical software update failure that cascaded into a widespread outage affecting millions of devices across various sectors. The incident began with a seemingly routine content update to CrowdStrike's Falcon platform. However, a defect in the update caused the Falcon sensor to crash Windows hosts, leaving a vast number of clients' machines stuck at the blue screen of death simultaneously. This single point of failure in a centralized system rapidly escalated into a crisis affecting financial institutions, healthcare providers, energy companies, and government agencies. I experienced this firsthand with my work laptop, and again later that morning when I tried to take the CCSP at a testing center, only to find all of its computers stuck on the BSOD.
Banks reported disruptions in transaction processing systems, hospitals faced interruptions in accessing patient records, and several power plants had to switch to manual operations due to concerns about compromised industrial control systems. The ripple effects of this outage highlighted how deeply embedded CrowdStrike's services had become in critical infrastructure across multiple industries.

What makes this incident particularly noteworthy is that it wasn't the result of a malicious attack, but rather an internal error; still, this is what a crippling cyber attack could look like. This underscores a key vulnerability of centralized systems: even without external threats, they can still fail catastrophically. The concentration of so many critical services under one provider created a single point of failure that, when compromised, had far-reaching consequences.

The CrowdStrike outage should serve as a wake-up call about the dangers of over-reliance on centralized cybersecurity solutions and systems. It demonstrated how the very systems designed to protect against disruption can themselves become vectors for widespread disruption when they fail. This incident reinforces the need for diversity and redundancy in critical systems, echoing John Robb's warnings about the vulnerabilities created by our interconnected, centralized infrastructure.
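Robb's cascade dynamic is easy to see in a toy model. The sketch below is purely illustrative: the systems, their dependency links, and the all-or-nothing failure rule are hypothetical simplifications of my own, not a model of any real infrastructure.

```python
# Toy model of systems disruption: failures propagating through a
# dependency graph. Every node and edge here is hypothetical, chosen
# only to illustrate why a "well-placed" failure cascades.

# Each system lists the systems it depends on.
dependencies = {
    "power_grid": [],
    "telecom": ["power_grid"],
    "banking": ["power_grid", "telecom"],
    "hospitals": ["power_grid", "telecom"],
    "payments": ["banking", "telecom"],
}

def cascade(initial_failure):
    """Return every system that fails when initial_failure goes down,
    assuming a system fails if ANY of its dependencies has failed."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for system, deps in dependencies.items():
            if system not in failed and any(d in failed for d in deps):
                failed.add(system)
                changed = True
    return failed

# A single failure at the most critical node takes everything down...
print(sorted(cascade("power_grid")))
# ['banking', 'hospitals', 'payments', 'power_grid', 'telecom']

# ...while losing a leaf node stays contained.
print(sorted(cascade("payments")))
# ['payments']
```

The asymmetry Robb describes falls straight out of the graph: the attacker only needs to find the one node whose failure set is the whole network, while the defender has to add redundancy everywhere.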
Networked Warfare

In 2007, John Robb's "Brave New War" introduced a radical new framework for understanding conflict in the 21st century. At the time, Robb's predictions may have seemed speculative, but they have since proven to be disturbingly accurate. Robb argued that the future of warfare would be dominated not by nation-states and traditional military forces, but by decentralized, networked insurgencies and super-empowered individuals who would leverage technology to disrupt societies in ways previously unimaginable.
This book, which I picked up as a freshman poli-sci major in 2007, shaped many of the papers I wrote, and as I sit here in 2024 writing this, the world has seen Robb's vision unfold in real time. From the rise of ISIS to the ongoing conflict in Ukraine, from cyber attacks on critical infrastructure to the influence of tech billionaires on global affairs, the concepts outlined in "Brave New War" have moved from the realm of theory to stark reality.

When Robb introduced the concept of networked warfare in "Brave New War" in 2007, it represented a radical shift from traditional military doctrine. Robb envisioned a world where decentralized groups, operating without rigid hierarchies, would challenge state powers through adaptability and resilience. Today, this form of warfare has become the norm rather than the exception, with many parallels to what we see in the cybersecurity world.

The evolution of cyber warfare provides a perfect parallel to the rise of networked warfare in physical space. In many ways, cyber threat actors were the vanguard of the decentralized, agile approach that's now reshaping conventional conflicts. In cybersecurity, we've long observed how decentralized hacking groups and state-sponsored actors consistently outmaneuver more traditional, hierarchical defense structures. Consider groups like Anonymous or the countless ransomware gangs operating today. They function as loose collectives, often with members spread across the globe, coordinating their efforts through encrypted channels and dark web forums. This structure allows them to rapidly adapt to new security measures, share zero-day exploits, and launch coordinated attacks that are difficult to attribute or counter. This dynamic, which emerged in the digital world first due to the inherent nature of the internet as a decentralized network, has now manifested in physical conflicts.
The ongoing war in Ukraine serves as a prime example of networked warfare in action, mirroring the tactics we've seen in cyberspace. Ukrainian forces, bolstered by volunteer battalions and local defense groups, initially employed a networked approach that allowed them to effectively resist a larger, more conventionally structured Russian military. These decentralized units operated with high autonomy, making decisions on the ground without waiting for orders from a central command. This flexibility proved crucial in responding to the fluid and unpredictable nature of the conflict, especially during the early days of the 2022 Russian invasion.

However, the effectiveness of Ukraine's networked warfare tactics didn't go unchallenged. As the conflict progressed, Russian forces began to adapt, albeit slowly and at great cost. This adaptation underscores a key aspect of networked warfare: it's not a silver bullet, but rather a constantly evolving approach. Russia's shift became evident in several ways. They increased autonomy for frontline commanders and adopted smaller, more mobile units. Their information sharing improved, though it still didn't match Ukraine's speed. The integration of mercenary groups like Wagner, which often operated with more autonomy than traditional military units, allowed for more flexible tactics. Russia also ramped up efforts to disrupt Ukrainian communications through enhanced electronic warfare capabilities.

This evolution mirrors what we see in cybersecurity, where threat actors and defenders are locked in a constant arms race of tactical innovation. The side that adapts faster and more effectively gains a temporary advantage, until the other side catches up. As many of us know, the attackers almost always have the advantage. The lesson here isn't that networked warfare doesn't work, but rather that its effectiveness depends on continual evolution and the ability to stay one step ahead of the opponent.
Ukraine's initial success came from being more adept at networked operations than Russia. As Russia has slowly closed that gap, the conflict has entered a new phase where both sides are employing elements of networked warfare.

This dynamic isn't unique to Ukraine. We've seen similar patterns play out in various conflicts around the world. The rise and fall of ISIS demonstrated how a decentralized network could rapidly gain territory and influence across multiple countries, challenging traditional state powers. Their use of social media for propaganda and recruitment mirrored tactics used by cyber threat actors. In Mexico, drug cartels operate through highly decentralized networks that extend their influence across vast territories and even into international markets. This structure makes them incredibly resilient and difficult to dismantle, much like persistent cyber threat groups. The 2020 protests and riots in the United States saw decentralized groups like Antifa rapidly mobilize and coordinate actions across multiple cities, often outmaneuvering more hierarchical law enforcement structures.

The prevalence of networked warfare poses significant challenges to traditional military and security structures in both the cyber and physical domains. State actors are being forced to adapt, moving away from rigid command hierarchies towards more flexible, mission-oriented command structures. However, this adaptation is often slow and hampered by institutional inertia. In cybersecurity, defenders often find themselves playing catch-up, constrained by organizational hierarchies, compliance requirements, and the need to protect vast attack surfaces. The "assumed breach" mentality that's become prevalent in cybersecurity is a tacit acknowledgment that networks will be compromised; the goal now is to detect and respond rapidly rather than to try to build an impenetrable perimeter. This mindset has carried over to physical conflicts.
Ukrainian forces, adopting an approach similar to modern cybersecurity practices, operated under the assumption that Russian forces would break through at some point. Their networked structure allowed them to rapidly detect incursions and respond flexibly, much like a well-designed incident response plan in cybersecurity.

The line between cyber and physical warfare will likely continue to blur. The skills and mindset required to operate effectively, whether you're a cyber defender or a military strategist, are remarkably similar. Adaptability, decentralized decision-making, and the ability to function as part of a resilient network are becoming the core competencies of modern conflict, regardless of the domain. The success of networked actors in recent conflicts, both in cyberspace and on physical battlefields, underscores Robb's prescient understanding of how technology and social dynamics would reshape modern warfare. However, it also highlights that networked warfare isn't a static concept, but a dynamic, evolving approach that requires constant innovation to remain effective. As the 21st century marches on, the ability to operate in a networked, decentralized manner, and to continually evolve these tactics, will likely become even more critical in determining the outcome of conflicts, both large and small, in all domains of warfare. The challenge for both state and non-state actors will be continuous adaptation and innovation to stay ahead in this new generation of warfare.

Inoreader

I've noticed a lot of people in the Simply Cyber community each morning asking how to stay consistently informed about the latest cybersecurity news.
My answer is Inoreader. This RSS feed reader has been a game-changer for keeping up with cyber news. Inoreader has a free option and a $10 monthly paid option; I use the free version, though the paid tier looks like it has its benefits. After you sign up, add 'Cybersecurity Insiders' and 'Cyber Intelligence' to your feeds. These are excellent sources for the latest in cybersecurity developments and insights. From a particular feed's drop-down menu, you can also use the "More like this" option to find similar feeds to add if you wish. Create a dedicated 'Cybersecurity' folder in your Inoreader to keep your focus sharp and your information stream uncluttered. You can adjust the settings to your preferences, such as how long items stay in your feed or whether to show only unread items. I'm not unique in doing this; I saw this a few years ago from someone on LinkedIn and wanted to pass it along now. Go check it out; they have a handy guide for getting started.

My Forgotten Hobby from the Fading Tech Frontier

In 2004, the internet was a different world. Social media was in its infancy, smartphones didn't exist, and the web was still very much a digital frontier. It was in this context that I stumbled upon Mike Outmesguine's "Wi-Fi Toys". In 2004 I was a teenager with more curiosity than sense, thumbing through a book that would, unknowingly, pop back into my memory years later as I relearned a lot of networking basics.
"Wi-Fi Toys" was like a cookbook for tech mischief, filled with projects and was a gateway to a world of possibility. My pride and joy, and magnum opus of this time in my life was a war driving rig cobbled together from a basic laptop which was little more than a word processor with a WiFi card slot. I'd modified a wireless card with some coax cable and solder, a DIY hack that made me feel like a proper nerd and a skill I regrettably let lapse until last year. This setup became my trusty sidekick, mapping WiFi networks in my hometown long before I understood the privacy implications. This setup came in handy a few years later when my family disconnected the internet when I went off to college. The irony? While I was unknowingly engaging in actual network exploration, I thought I was hot stuff for accessing IRC via telnet on school computers. I wince thinking back on my misplaced pride, then again, I was barely 16 and found something no one else around me was doing, and considered it like a game. The book delved into antenna theory, which like soldering, I wish I remembered more of now that I've gotten into HAM radio. The DIY antenna projects, Pringles can designs, cantennas, biquad builds were more than just fun tinkering. They provided real-world lessons in signal propagation and gain, offering a ground-level understanding of wireless networking's physical layer whether I realized it or not. Today, the only projects from the book that might still hold water are the antenna designs and possibly the solar WiFi repeater concept. But even these are largely outclassed by off-the-shelf products for most applications. The DIY solar repeater, while an interesting project, would be a security nightmare in today's landscape. You might argue it could be useful for a farm in an area with no cell service, but even that's a stretch given solutions like Starlink. One project I vividly remember was the car-to-car video conferencing setup. 
By today's standards, it was about as elegant as a brick phone, but back then? The idea of video calling between moving vehicles blew my mind. It was peak "because we can" energy, the kind of wonderfully impractical experiment that defined that era of tech tinkering.

But here's the kicker, and my biggest regret. After high school, I put all of this aside. This passion, this knack for hands-on tech exploration, got shelved as I pursued other directions. It wasn't until years after college that I rediscovered my love for tinkering with technology. Looking back, I can't help but wonder: what if I'd recognized this passion for what it was? Where might I be now if I'd nurtured that spark instead of letting it smolder?

Don't get me wrong, the experience wasn't wasted. That ground-level understanding of how networks function? It's been invaluable in my cybersecurity career, which I didn't even plan on getting into until 2019, even as the technical details have evolved and I've had to learn more than just the absolute basics. "Wi-Fi Toys" taught me to think creatively about technology, to understand systems by building and occasionally breaking them.

I keep this book on my shelf now, a sort of personal time capsule. It reminds me of an era when the internet still felt like uncharted territory, before it became the highly regulated, security-conscious space we have today. For anyone just starting in tech or cybersecurity, the projects in "Wi-Fi Toys" might seem quaint or even ancient. But the underlying lesson? That's timeless. Get your hands dirty. Dive deep into systems. That kind of hands-on understanding is crucial, even if the tools and challenges have changed. Like most things, the only constant is change. What's cutting-edge today will be obsolete tomorrow. The real skill isn't in mastering any particular technology, but in cultivating the curiosity and adaptability that help you keep pace with that constant change.
My past with "Wi-Fi Toys" taught me something crucial: it's never too late to rekindle an old passion. Sure, we can't turn back the clock to those Wild West days of the early internet. But that spirit of curiosity, that drive to take things apart and see how they tick? That's something we can and should carry forward. So my advice, borne from experience: recognize your passions. Nurture them. And if you've let one lie dormant, don't be afraid to dust it off and see where it leads. It's never too late to make up for lost time. AHA CritiqueThe AHA, speaking for nearly 5,000 hospitals, has some legitimate beefs with CISA's proposed rules. They're not totally off base, but some of their arguments need a reality check.
First, the valid concerns. The AHA's gripe about multiple, overlapping reporting requirements from various agencies is spot on. It's a bureaucratic goat-rodeo that helps no one. Hospitals shouldn't need a team of lawyers just to figure out who to tell when it goes off the skids. CISA should take the lead in harmonizing these requirements across federal and state levels. One streamlined system would make compliance easier and improve the quality of incident data.

The AHA is also right to highlight the operational burden during an active cyberattack. When ransomware's encrypting patient records, the last thing a hospital needs is to get bogged down in paperwork. The suggestion to simplify initial reporting and follow up with details later is sensible. It strikes a balance between immediate action and thorough documentation.

However, the AHA's arguments start to fall apart with their resistance to the 72-hour reporting window, which is frankly crap. Nobody's expecting a full post-mortem in three days. It's a simple notification that something's amiss. If the mouth breathers at the TSA can manage this timeframe, hospitals can too. This early warning system is vital for mitigating attacks and minimizing fallout.

The AHA's hand-wringing over two-year data retention is equally misguided. Cyber investigations aren't CSI episodes wrapped up in an hour. Sophisticated attackers can lurk in systems for months or years. Historical data is crucial for understanding their tactics and plugging vulnerabilities.

Their emphasis on the burden to smaller hospitals, while understandable, misses the forest for the trees. Cybercriminals don't discriminate based on hospital size. In fact, smaller institutions often make softer targets. Instead of pushing for broad exemptions, the AHA should be advocating for targeted support and resources to help smaller hospitals meet these critical standards. But that costs money, and money is tight. Money, now that's clearly a sticking point.
Yes, effective cybersecurity and incident reporting cost money. But you know what costs more? Getting your entire system locked up by ransomware or facing massive lawsuits over breached patient data. It's time for healthcare executives to wake up and smell the malware. Cybersecurity isn't an IT problem; it's an existential threat to their operations. Maybe it's time to redirect some of those bloated C-suite salaries into actual security measures.

The AHA's fear of legal and reputational risks from incident reporting, despite CISA's anonymity assurances, seems overly paranoid. Properly anonymized data can provide crucial insights without exposing individual institutions. This isn't about naming and shaming; it's about building a collective defense against evolving threats. That said, the call for stronger anonymity guarantees in reporting is crucial. Hospitals need to know they can be honest without painting a target on their backs for lawsuits or reputational damage. However, if criminal negligence is involved, it should come to light, and in my opinion there should be punitive measures.

Healthcare is under constant, sophisticated cyberattack, and many of these incidents exploit known vulnerabilities that could be mitigated with better defenses, due diligence, and information sharing. The AHA's resistance to comprehensive reporting requirements is short-sighted and potentially dangerous. CISA may or may not be a lot of things, but it isn't the enemy here. They're trying to build a coordinated defense against threats that are only getting more sophisticated and dangerous. The AHA and its members need to be part of the solution, not roadblocks to progress. Instead of fighting these necessary measures, the AHA should be working with CISA to refine and implement them effectively. They should be pushing for more resources, better training, and streamlined processes, not trying to water down critical security measures. In the end, this isn't just about compliance or avoiding fines.
It's about protecting patients, safeguarding critical infrastructure, and maintaining trust in our healthcare system. The AHA needs to recognize that healthcare is critical infrastructure and a component of national security, and that these reporting requirements, while challenging to implement, are essential for the long-term health and security of the entire sector.

The Dystopian Subtlety of AI in the Modern Workplace

Popular media often imagines AI as a dystopian force, conjuring images of the rogue machines of "Terminator" or "The Matrix" threatening humanity's very existence. However, the reality of AI's infiltration into our lives, particularly in the professional sphere, is far subtler and perhaps more insidiously banal. Rather than the overtly malevolent murder-robots of "Terminator", the AI we encounter is more akin to the benign yet oppressive presence of "AUTO" in "WALL-E". It doesn't seek to violently overthrow or exterminate humanity, but instead slowly integrates into our corporate ecosystems, becoming an omniscient overseer embodying the strictest traits of HR and middle management.

Now, imagine a future where every keystroke is tracked, productivity is measured to the second, and work patterns are relentlessly analyzed, all in the name of optimization. This Orwellian oversight won't come from a human manager, but from AI systems single-mindedly designed to maximize efficiency at all costs. It might not be hard to imagine, because some in corporate management are already tracking workers via Teams. From monitoring coffee breaks to algorithmically streamlining creative processes, these systems could reduce the autonomy and nuance of human work to mere datapoints in service of hyper-efficiency. HR departments may morph from defenders of the company line against workers into custodians of AI-generated performance metrics that may or may not determine who gets hired and who gets fired.
Our professional existence could soon be defined by an uneasy balance between AI's empowering capabilities and its coldly oppressive impulses, inexorably driven by impersonal corporate cost-balancing imperatives. We may retain nominal control, but find our judgment persistently shaped by the ruthless logic of artificial "intelligence".

The Duality of AI: Empowerment and Oppression

Yes, there's no denying AI's immense power to augment individual capabilities. I've personally leveraged AI to create this website without web development expertise and to generate digital art despite my limited artistic talent. For individuals, the empowering potential is clear. However, I remain deeply skeptical of AI's broader societal implications. While amplifying individual abilities, AI also has an undeniable capacity to enable insidious oppression of entire groups. Hyper-empowered individuals wielding AI could accelerate society towards the entropically fractured state that "WALL-E" bleakly satirizes.

The legend of John Henry, "The Steel Drivin' Man", serves as a potent metaphor for the human spirit's struggle against the march of technological progress. Battling a steam-powered hammer, John Henry symbolizes our distinctly human capacity to resist being subsumed by machines. Today, AI has assumed the steam engine's role as a transformative force disrupting industries and reshaping labor. In cybersecurity and beyond, AI is revolutionizing fundamental paradigms and processes. We must critically examine how we harness this power. Ideally, AI would amplify human effort by providing productivity-boosting tools that augment rather than replace human expertise. It can streamline mundane tasks like scheduling, data analysis, content generation, and social media management, freeing up immense human potential and, more importantly, time. The goal should be fusion, not cold elimination. However, we must remember John Henry's cautionary tale.
His Pyrrhic victory over the steam drill came at the ultimate price: complete depletion and death by exhaustion. While we embrace AI's tantalizing efficiencies, we cannot lose sight of the human toll and the cost to qualities like creativity and empathy. Balance is key. Let AI be a powerful hammer that handles the tedious, repetitive tasks, but humans must retain agency over the nuanced, complex work of creative strategy and meaningful decision-making. The power of AI is undeniable, but so are its risks. We must proactively shape its trajectory, remaining clear-eyed about both the advantages and perils, and steadfastly ground its development in an unwavering commitment to empowering human potential rather than enabling its marginalization or substitution.

Lessons from the Dune Universe (Butlerian Jihad 2024)

The Butlerian Jihad serves as a plot device that Herbert used to sidestep technological issues and focus on social and philosophical themes in his, and later his son's, works. The absence of computers and other "thinking machines" in the original novels stems from this Jihad. The term "Butlerian Jihad" may allude to Samuel Butler's 1872 novel "Erewhon," in which society destroys machines for fear they might out-evolve humans.
Additionally, Herbert's works reflect Heidegger's idea that technology makes humans think like machines, which Herbert saw as limiting human potential and evolution. Heidegger argued that technology frames everything in terms of efficiency and control, leading humans to adopt a deterministic mindset. Herbert's elimination of AI in the Dune universe highlights the importance of human creativity and evolution, critiquing over-reliance on technology and advocating for a more human-centric approach to progress.

But anyway, to get to the point. The Dune lore explores the complex history between humanity and technology, epitomized by the Butlerian Jihad, the cataclysmic revolt against the domination of thinking machines that sets the technological baseline of the Dune universe. At its core, the Butlerian Jihad embodied an existential fear of machines usurping control over human decisions and destiny. It represented an impassioned fight to reaffirm human sovereignty against machine tyranny. The trauma of this epoch indelibly shaped mankind's trajectory, instilling an abiding aversion to any technology mimicking human cognition. This echoes contemporary anxieties around AI eroding human agency and individualism.

The Jihad's animating concern was the specter of humans themselves beginning to think like machines, processing data devoid of wisdom, empathy, or ethics. Such capitulation threatened to dehumanize society, extinguishing the vital sparks of creativity, nuance, and free will that define the human spirit. These fears resonate powerfully today as we grapple with AI's increasing ubiquity. AI systems optimized for efficiency and productivity could incentivize a mode of human cognition that prizes cold rationality over the "fuzzy logic" of emotions, intuitions, and insights. When algorithms dictate decisions, the scope for distinctly human faculties like gut feelings, non-linear connections, and flashes of improvisation can atrophy.
Conformity to machine-like paradigms risks flattening the vibrancy of human thought into a gray-scale of programmatic "intelligence" unmoored from human values. As we grow reliant on AI's superlative pattern-recognition and data-crunching abilities, we risk outsourcing core human strengths such as our capacity for critical thinking, creative problem-solving, and contextual empathy. The vitality of human thought, rich in diversity, adaptability, and inspired leaps of imagination, is both our most precious asset and our most vulnerable trait.

In the Dune universe, the post-Jihad order wisely embraced human-centric disciplines and technologies designed to augment rather than replace human faculties. The Mentats, the Bene Gesserit, and the Spacing Guild all represent models for achieving synergy between cutting-edge knowledge and foundational wisdom, as fantastical as they are in the story. Importantly, they underscore the necessity of preserving human sovereignty over existential functions and maintaining channels for applying human values to technological capabilities. They remind us that while technologies like AI are incredibly powerful tools, they must remain instruments of human intention, discernment, and ethics.

The core challenge is ensuring that artificial "intelligence" enriches rather than diminishes our most essential, human qualities. We must enshrine values like fairness and transparency as non-negotiable principles in AI development and individual use. Crucially, even as we harness AI's pattern-matching prowess, we cannot allow it to impose a tyranny of hyper-efficiency that tramples human autonomy, individual expression, and our capacities for abstraction and imagination. These are the vital antibodies to automated conformity. Dune powerfully illustrates both the existential necessity and the daunting difficulty of preserving a recognizably human future amidst technological disruption.
By heeding its warnings and embracing its wisdom, we have a chance to sculpt an AI-enabled future that remains animated by the most precious human values: our creativity, free will, empathy, and critical reasoning. The alternative is passively sleepwalking into a "WALL-E" world where human agency has withered under the weight of algorithmic nudges, and that is simply not acceptable. We must vigorously engage in shaping AI's trajectory rather than allowing ourselves to be shaped by it. The choice between empowerment and abdication is stark.

You Aren't the Cyber Police & The Consequences Are Still The Same

A common misconception, held even by practitioners in the field and one I regularly see on Reddit and sometimes LinkedIn, portrays cybersecurity professionals as rigid enforcers who stubbornly impose restrictive policies while impeding business operations, or simply the "Cyber Police." This perception is misguided, oversimplified, and frankly naive.
Technical proficiency is a must for cybersecurity professionals, as we have to maintain comprehensive, up-to-date knowledge of the vulnerabilities and attack vectors that malicious actors continually attempt to exploit. That technical understanding allows us to assess risks within business operations accurately. Rather than acting as authorities flexing control, effective cybersecurity advisors facilitate risk management decisions aligned with an organization's objectives as set by leadership.

Good security practice enables symbiotic risk management through an open exchange of perspectives. In an ideal world, "our" job is to accurately communicate threat realities while leadership conveys their risk tolerances given financial, operational, and business pressures. Many of us have been in meetings where management explicitly accepted risks that security strongly advised them to mitigate. While frustrating, it's important to understand that organizational leadership balances acceptable risk postures against competing priorities such as profitability and operational efficiency. If they deem the risk to be within their appetite, that's ultimately on them.

While security advisors may harbor cynicism over unnerving risk-acceptance decisions, channeling that cynicism into negativity or combative posturing is counterproductive. The objective must be pragmatic advisement that balances organizational realities with substantiated risk-mitigation guidance. Go home, cash your check, and do it all again the next day. You did your job.
Finding Value

This is essentially a recap of my SimplyCyber Community Challenge post over on LinkedIn, with some thoughts I picked up from Christian Hyatt of risk3sixty's talk at BSides Atlanta (2023), titled "The Art of Service: 5 Lessons Learned About Life, Leadership, and Business From Building a Cybersecurity Company."

Cybersecurity is often portrayed as reserved for those with a linear, tech-heavy background requiring a Comp Sci degree. My journey, however, took an unconventional path that not only led to a fulfilling career in cybersecurity but also revealed the unique value I bring to the field. My first flirtations with computer science in high school involved tinkering with networks, building PCs, and dabbling in code. Despite being steered away from this path by the public school system, the spark never died. It manifested in hobbies that danced around the edges of hard tech, from crafting USB hacking tools to war-driving setups. Little did I know these experiments would resurface a decade later.

At West Virginia University, I majored in international studies and geography, focusing on intelligence & national security, which introduced me to the power of open-source intelligence (OSINT). Here, I discovered a passion for innovative and novel applications of tech for security. This interest would later become the cornerstone of my cybersecurity career in the form of script automation and spreadsheets. Case in point: using GIS to analyze terrorism and conflict statistics evolved into a capstone project called "Visualizing Terrorism from 1970 to 2013," which succeeded in large part because I had to learn Python and advanced Excel tricks to process the sheer volume of data and the functions within ArcGIS.

After a few failed GIS interviews following graduation, my career trajectory detoured through Texas. I navigated roles that seemed light-years away from my tech interests.
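As an aside, the Python that capstone demanded was mostly mundane aggregation: collapsing thousands of raw incident rows into per-year counts before they ever touched ArcGIS. A minimal, hypothetical sketch of that kind of preprocessing (the field names and sample rows are made up for illustration, not from the actual dataset):

```python
# Hypothetical sketch: tally incident records per year before mapping
# them in GIS. Field names and sample data are illustrative only.
from collections import Counter

def incidents_per_year(records):
    """Count incidents by year, dropping rows with a missing year."""
    return Counter(r["year"] for r in records if r.get("year"))

sample = [
    {"year": 1970, "region": "Europe"},
    {"year": 1970, "region": "Americas"},
    {"year": 2013, "region": "Asia"},
    {"year": None, "region": "Africa"},  # malformed row, skipped
]

print(incidents_per_year(sample))  # Counter({1970: 2, 2013: 1})
```

Nothing fancy, but scaled up to a real CSV export, exactly this sort of scripted aggregation replaces hours of manual spreadsheet work.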
Yet each position, from admin duties to database management, was, unbeknownst to me, building a toolkit of skills that would become gold in cybersecurity, especially the soft skills of communicating with non-tech types. I was catapulted into a "Supply-Chain Data Manager" role in D.C. during a start-up's chaotic "restructuring," during which the supply-chain manager was fired and his duties landed on my Data Analyst desk. While unrelated to cyber, this mixed bag of experience exposed me to more complex data intricacies, process efficiency, and the importance of a well-oiled operational machine. Unfortunately, the start-up did what most start-ups do: implode.

During the very rough 18-month interlude that followed, I ping-ponged between temp gigs and a relentless, albeit fruitless, interview grind. Accommodations often meant crashing on a friend's couch or roughing it in a car in a gym parking lot across Texas and the Virginias. I eventually boomeranged back to WV and picked up day work delivering furniture when I could. Amidst this chaos, I started dusting off my Python skills on Coursera and prepping for the Sec+ and CCNA certifications. Unfortunately, the financial burden of the exams kept them out of reach.

I finally found my footing in a supply-chain gig in D.C., which required me to sleep on a bedroll in the office closet of a friend's apartment just to accept the job. While I never loved or wanted to be in supply-chain or procurement, this role exposed me to Lean Six Sigma and Kaizen process-improvement training, concepts I often reflect on in cybersecurity processes. This position was a game-changer: my physical proximity to the cybersecurity and networking team in the cube farm, coupled with supportive management, led to a career pivot. The arrival of the COVID-19 pandemic and the shift to WFH were the catalysts for me to enroll in WGU's cybersecurity and information assurance program.
This culminated in my current role on the security team during a corporate restructuring.

Imposter syndrome might be my oldest friend. It took some time, and I still deal with it, but as I transitioned into my security team role, it clicked: my value in cybersecurity was rooted in my Frankensteined-together background. My ability to dissect complex data, understand operational workflows, manage crises, and (perhaps most importantly) communicate effectively with the non-technical crowd wasn't just an asset; it was a differentiator that allowed me to tackle challenges with a fresh perspective.
The epiphany came when I recognized the parallels between my experiences and the various niches of cybersecurity. My analytical chops from GIS, the crisis management skills I picked up in the reserves, the soft skills polished in the UTD media office, and the operational insights gained through supply-chain management all formed a Swiss army knife of insights, tools, tricks, and experiences that have been useful during my transition into cybersecurity.

Value often lies hidden in the sum of our experiences. Discovering my value, the "what makes me relevant" in cybersecurity, wasn't a lightning-bolt moment but a gradual process. It was the realization that the skills and experiences I had collected along the way weren't random or useless; they were the building blocks of my unique professional identity, one that could maybe even be called a brand. We all know it by now, but finding your place in cybersecurity (and life) is a messy, chaotic journey. My path has been a wild patchwork of experiences, but it's taught me that value in this field goes beyond mere technical prowess. It's about the unique combination of skills, perspectives, and hard-won wisdom each person brings.

Consider this a continuation of my musings on discovering your value and relevance, with a dash of philosophy via the Japanese concept of Ikigai. Let's be honest; most of us didn't walk easily into cyber via some smooth, traditional path. Your journey likely meandered through a menagerie of industries, roles, and existential crises. But here's the thing: each of those experiences armed you with a one-of-a-kind toolkit and perspective. The sum total of your story, like a mosaic or a patchwork of GitHub snippets and ancient Stack Overflow lore, makes you invaluable. Embrace the twists and turns, because they've forged your approach to problem-solving and innovation. Certifications and courses are important milestones.
However, the real value is a commitment to lifelong learning in a field where the tech landscape shifts rapidly. Whether you went the traditional brick-and-mortar route or are a self-taught savant, your drive to stay current and adaptable showcases your worth. Every gig you've had, whether a mundane retail stint, six months working at Panera, or four years bouncing at a nightclub, imparted lessons that translate to cybersecurity. Wrangling complex projects, putting out fires, and managing teams forged pure-gold soft skills. Mine your work history for the abilities you've honed and consider how they can be transformed into fresh methods, sharper leadership, or more effective communication.

Cybersecurity isn't a solo gig; nothing really is. It thrives on collaboration and the ability to navigate cross-disciplinary teams. Forget the lone-wolf fantasy; that path leads to burnout and disillusionment. If you've juggled different departments and steered cross-functional projects in the military, retail, or hospitality, you've laid the groundwork for the collaboration skills vital in cyber, or really anything.

The cyber world is a vast, deep ocean with countless niches to explore. Your golden ticket could be discovering a specialty that capitalizes on your unique background. Maybe you've got a knack for OSINT or a burning passion for threat intel. Lean into those interests to unearth a bespoke role where your talents shine brightest. Pinpointing your value demands some serious soul-searching. It's about assembling your experiences, skills, and curiosities into a crystal-clear picture of where you slot into the grand puzzle. My odyssey has taught me that value isn't some generic commodity; it's deeply personal. Your past isn't just filler in your resume; it's the chapters that make your story gripping and your contribution priceless.
Author

I'm Luke Canfield, a cybersecurity professional. My personal interests revolve around OSINT, digital forensics, data analytics, process automation, drones, and DIY tech. My professional background includes data analytics, cybersecurity, supply-chain and project management.