Monday, February 2, 2026

On Neumann's Paper "Toward Total-System Trustworthiness": We're Building Houses of Cards

As one of my New Year's goals, I have committed to writing a few blog posts a month, and class work gives me a good opportunity to express my rants and ramblings while getting some learning done in the process.

Peter G. Neumann is a legend in computer security for good reason. His article "Toward Total-System Trustworthiness" names something most of us in technology leadership sense but rarely articulate: we're playing a losing game. Every patch, every wrapper, every clever workaround adds another card to a structure that was never designed to bear the weight we're placing on it.

The Southwest Airlines meltdown brought this into sharp relief. A classmate whose discussion post I responded to earlier today pointed out that their catastrophic failure during the winter storm wasn't a technology problem; it was an archaeology problem. Southwest had essentially wrapped a 1990s-era scheduling system called SkySolver in newer interfaces, hoping the wrapper would compensate for foundations that were never updated for modern scale. When the storm hit, the sheer volume of data overwhelmed the underlying logic, and no amount of clever interfacing could save it.

Neumann calls this the "patch-on-patch" approach. I call it "technical debt," a term I know well from my days in software engineering and leading product development teams, with its haunting reminders of bug fixes and facing the music from unhappy customers, all coming due with compound interest.

Why Total-System Trustworthiness Remains Elusive

After twenty-plus years leading technology operations across global organizations, I've come to believe there are four fundamental reasons why achieving true system trustworthiness remains aspirational at best—especially when you're simultaneously responsible for keeping the lights on.

The "Less Untrustworthy" Objective: The "Less Untrustworthy" Objective. People tend to use simple binary categories when evaluating systems because they believe these systems exist in only two states: secure or insecure, and trustworthy or broken. Neumann presents this concept as a gradient that transforms the entire system. The main priority should be to minimize untrustworthy actions because we understand that humans will always fall short of achieving complete trust with one another. Medical practice requires drug interaction screening instead of achieving absolute treatment success. In systems, it means assuming your components will fail and engineering the resilience to absorb it.

The Legacy Trap: Neumann argues for clean-slate design rather than bolting security onto existing, outdated systems. Think of constructing a dormitory: once the foundation is poured, it sets physical boundaries that determine the building's shape. Our computing foundations rest on the x86 architecture and the C programming language, both developed before cyber warfare as we know it existed. You can never retroactively perfect a sinking foundation. We can only shore it up and resolve to build the next one differently.

Anticipating the "Space Aliens: I am reminded of a computer game I used to play :) Anticipating the "Space Aliens."Security professionals defend against space aliens using a humorous method that illustrates a basic threat modeling principle that seems ridiculous at first. Neumann explains that we cannot predict all environmental elements, including floods, earthquakes, and zero-day exploits, but we should design systems that continue to function properly during decline. A dependable system produces clear, limited failures rather than complete system breakdowns.

Designing with Humility: The most crucial element runs counter to the industry's conventional values, which prize speed and self-assurance; it asks us to slow down. Accepting that complex systems can never be fully understood makes us more likely to build monitoring that detects failures early, compartmentalization that keeps a crack in one floor slab from collapsing the entire roof, and formal methods that verify the specific components we can actually control.
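
Compartmentalization is the piece I find easiest to put into practice. Here's a minimal Python sketch of the circuit-breaker pattern (the names and thresholds are my own illustration, not anything from Neumann's paper): a flaky dependency gets walled off, and callers receive a defined fallback instead of a cascading outage.

    import time

    class CircuitBreaker:
        """Wraps calls to a dependency and fails fast once it looks unhealthy."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures  # consecutive failures before tripping
            self.reset_after = reset_after    # cool-down in seconds before a retry
            self.failures = 0
            self.opened_at = None             # set when the breaker trips

        def call(self, func, *args, fallback=None, **kwargs):
            # While the breaker is open, short-circuit until the cool-down elapses.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    return fallback
                self.opened_at = None         # half-open: allow one probe call
            try:
                result = func(*args, **kwargs)
                self.failures = 0             # a healthy call resets the count
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the breaker
                return fallback

The point isn't this particular pattern; it's that the design assumes failure from the start, which is exactly the humility Neumann is asking for.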

The Leadership Paradox

What keeps me up at night is that technology leaders inherit systems far more often than we design them. We carry dual responsibilities: keeping existing systems running and making them trustworthy. The obstacle is organizational and philosophical rather than purely technical. Neumann's challenge is to stop treating short-term security fixes as progress and to start talking openly about fundamental system vulnerabilities. Humility, and transparency about how our systems actually operate, will do more to protect against the next failure than another round of patches.

The question isn't whether our systems will fail; they will. The question is whether we can build systems that collapse in ways we can survive.


Friday, January 23, 2026

When Algorithms Shape Reality: How AI and Social Media Are Rewiring How We Think

 


Last November, Adam Aleksic delivered a five-minute TED talk that landed harder than presentations three times its length. His premise is deceptively simple: the AI tools and platforms we use daily aren't showing us reality—they're showing us a filtered, amplified, distorted version of it. And we're absorbing that distortion without realizing it.

I've been sitting with this one for a while. As someone who lives at the intersection of technology strategy and organizational transformation, I recognize the pattern Aleksic describes. I've seen it play out in enterprise systems, in user behavior, in the subtle ways digital tools reshape the humans who use them.

The Language We're Learning Isn't Ours

Here's the detail that stopped me cold.

ChatGPT uses the word "delve" at rates far exceeding normal English usage. The likely explanation? OpenAI outsourced portions of its training process to workers in Nigeria, where "delve" appears more frequently in everyday speech. A minor linguistic quirk from a specific population was reinforced during training and is now reflected among hundreds of millions of users worldwide.

But it doesn't stop there. Multiple studies have found that since ChatGPT's release, people everywhere—not just users—have started saying "delve" more often in spontaneous conversation. We're unconsciously absorbing the AI's patterns and mirroring them back.

As Aleksic puts it: "We're subconsciously confusing the AI version of language with actual language. But that means that the real thing is, ironically, getting closer to the machine version of the thing."

Read that again. The real is conforming to the artificial.

The Feedback Loop No One Asked For

This isn't just about vocabulary. Aleksic points to Spotify's "hyperpop" genre as a case study in algorithmic reality creation.

The term didn't exist in our cultural vocabulary until Spotify's algorithm identified a cluster of similar listeners. Once the platform created a playlist and gave the phenomenon a label, it became more real. Musicians started producing hyperpop. Listeners began identifying with or against it. The algorithm continued to push, and the cluster expanded. What started as an algorithmic observation became a cultural movement.

The same pattern drives viral trends—matcha, Labubu toys (the world is going crazy), and Dubai chocolate (I see them everywhere from Costco to World Market). An algorithm identifies latent interest, amplifies it among similar users, and, suddenly, businesses and influencers create content around what may have been an artificially inflated trend. We lose the ability to distinguish between organic cultural shifts and manufactured momentum.

The Uncomfortable Question

Aleksic doesn't shy away from the deeper implications.

"Evidence suggests that ChatGPT is more conservative when speaking the Farsi language, likely because the limited training texts in Iran reflect the more conservative political climate in the region."

If AI systems inherit the biases of their training data—and they do—what happens when millions interact with those systems daily? What range of thoughts do we stop considering because the algorithm never surfaced them? What possibilities get filtered out before we ever encounter them?

Elon Musk regularly modifies Grok's responses when he disagrees with them, then uses X to amplify his own content. Aleksic asks the obvious question: Are millions of Grok and X users being subtly conditioned to align with Musk's ideology?

These platforms aren't neutral. Everything in your feed or your chatbot response has been filtered through layers of optimization—what's good for the platform, what makes money, and what conforms to the platform's necessarily incomplete model of who you are.

Thinking About Thinking

Twenty-two years in global technology leadership has taught me something about systems: they shape behavior far more than we acknowledge. The tools we build eventually build us. The interfaces we design become the cognitive architecture through which users experience their work, their relationships, their world.

What Aleksic describes is that phenomenon at civilizational scale.

"TikTok has a limited idea of who you are as a user," he notes, "and there's no way that matches up with your complex desires as a human being."

And yet we scroll. We engage. We absorb. We mirror back.

The Only Defense

Aleksic's antidote is persistent self-interrogation:

Why am I seeing this? Why am I saying this? Why am I thinking this? Why is the platform rewarding this?

Simple questions. Difficult discipline.

"If you're talking more like ChatGPT," Aleksic concludes, "you're probably thinking more like ChatGPT as well, or TikTok or Spotify. If you don't ask yourself these questions, their version of reality is going to become your version of reality."

There's something almost spiritual in that warning. The ancient disciplines of self-examination—examine yourselves to see whether you are in the faith—take on new urgency when the voices shaping our inner dialogue aren't human at all.

The question isn't whether these tools are useful. They are. The question is whether we're using them—or being used by them.

Stay awake. Stay questioning. Stay real.

Friday, January 16, 2026

Decoding the Attack Vector: Entry Points in the Digital Build

 

Attack Vectors and Attack Surfaces

In the world of physical security, you don’t just worry about "theft"; you worry about the unlocked window, the side door with the faulty latch, or the delivery driver who isn't who they say they are. In cybersecurity, these specific pathways are our Attack Vectors.

An attack vector is simply the "how" and the "where" an adversary gains unauthorized access to your network. While the Attack Surface is the sum total of your exposure, the Vectors are the individual paths leading into the heart of the system.

The Common Vulnerabilities (The "Leaky Pipes")

Identifying attack vectors is the first step in hardening your infrastructure. Here are the primary culprits we see in the field:

  • Social Engineering & Phishing: This is the "human exploit." Instead of hacking the code, attackers hack the person. Whether it’s a credential-stealing link or a deceptive PDF attachment, this remains the #1 entry point for ransomware.

  • Account Takeovers (ATO): This happens when identity management fails. Stolen session cookies, brute-forced passwords, or credentials bought on the dark web allow attackers to walk through the front door as a "trusted" user.

  • The Insider Threat: Whether malicious (the disgruntled admin) or accidental (the dev who leaves an S3 bucket open), the threat from within is often the hardest to mitigate because the "vector" is already inside the perimeter.

  • Vulnerability Exploits (The Unpatched Flaw): Software isn't perfect, and bugs in code are like faulty locks. A zero-day is a flaw nobody has had a chance to patch yet; a known vulnerability you simply haven't patched is worse, because you've essentially left a master key under the welcome mat.

  • Infrastructure Misconfigurations: Open ports are the digital equivalent of leaving the garage door open. If a port isn't serving a specific business function, it should be closed. Period. (A quick self-audit sketch follows this list.)

  • Browser & Application Compromise: Because we live in a "Cloud-First" world, the browser is the new endpoint. Malicious scripts (XSS) or "poisoned" third-party apps can turn a standard web session into a bridge for malware.

Hardening the Perimeter: Practical Mitigation

You cannot eliminate every vector—the only 100% secure system is one that is turned off and buried in concrete. However, you can make the "cost of entry" too high for most attackers.

  1. Identity as the New Perimeter: Use MFA and session monitoring to kill the effectiveness of stolen credentials.

  2. Aggressive Patching: Automate your updates. A vulnerability is only a vector if it remains unpatched.

  3. Browser Isolation: Treat the public internet as "untrusted" by default. Executing code in a containerized environment keeps the mess off your local network.

  4. SASE (Secure Access Service Edge): As we move away from the traditional office, SASE integrates networking and security into a single cloud-native stack, closing the gap between the user and the app.

The Bottom Line

Think of your security posture like a building's blueprint. You can't remove every door, but you can ensure every door has a deadbolt, a camera, and a guard. By systematically identifying and closing off attack vectors, you shrink your Attack Surface and force the adversary to look for an easier target elsewhere.

Friday, January 9, 2026

Five Things You Should Know About IT Risk Assessment

 


Every organization faces data security threats. Hackers get smarter, attacks become more common, and security budgets stay tight. You can't protect everything equally, so you need to identify your biggest weaknesses and address them first.

That's what IT risk assessment does. It helps you identify, assess, and prioritize data security risks so you can focus your time and budget where they matter most.

Here are five things worth knowing about it.
At the higher education institution where I work, we built a thoughtful exercise around a simple Excel spreadsheet: it lists every area or department, and each one meets twice a year to self-evaluate its risks and the likelihood of impact. If you are interested, take a look at the sample sheet, which you can download and adapt for your own organization.

1. Risk assessment tells you where to focus your security efforts

Risk assessment and risk management sound similar, but they're different. Risk management is about controlling specific problems. Risk assessment is the bigger picture work of understanding all the threats you face, both inside and outside your organization.

Think of it this way: risk assessment helps you see the full map of dangers. Risk management is what you do about each one.

A good risk assessment might reveal misconfigured user permissions, forgotten active accounts, or admin rights that have spiraled out of control. Once you know about these problems, you can fix them before someone exploits them.

2. Many regulations require it

If your organization must comply with regulations such as HIPAA or GDPR, you likely need to conduct risk assessments. These regulations don't tell you exactly how to protect your systems, but they do require you to have security controls in place and be able to prove it.

Skipping risk assessment doesn't just leave you vulnerable to attacks. It can also lead to failed audits and expensive fines.

3. Frameworks make it easier to get started

You don't have to invent your own approach. Several well-tested frameworks exist that tell you what to look at, who should be involved, how to analyze what you find, and what to document.

Three popular options are OCTAVE (created by Carnegie Mellon University), NIST SP 800-30, and ISO/IEC 27001 (most recently revised in 2022). Pick one that fits your organization's size and needs, then adapt it as necessary.

All of these frameworks expect you to document your process. This creates a paper trail showing you're taking security seriously.

4. You have to keep doing it

Risk assessment isn't something you do once and forget about. Your IT environment changes constantly. New software gets installed, employees come and go, and attackers find new tricks.

A risk assessment from two years ago won't catch the inactive account someone forgot to disable last month or the permissions that have gradually gotten out of hand.

Make risk assessment a regular habit, not a one-time project.

5. The process has three basic steps

Risk assessment breaks down into three parts:

Find the risks. Look for weaknesses in your systems. Users may have more access than they need, your password policies may be too weak, or old accounts may still be active.

Estimate how likely each risk is. Not every weakness will actually cause a problem. Consider how probable it is that someone could exploit each vulnerability you found.

Decide what to tackle first. Combine likelihood with potential damage. A risk that's both likely and would cause severe harm warrants immediate attention. Something unlikely and minor can wait.
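
To make that last step concrete, here's a minimal Python sketch of the likelihood-times-impact scoring behind our spreadsheet exercise (the example risks and the 1-5 scales are illustrative, not a standard):

    # Toy risk prioritization: score = likelihood x impact, each on a 1-5 scale.
    risks = [
        {"name": "Stale accounts left active", "likelihood": 4, "impact": 3},
        {"name": "Weak password policy",       "likelihood": 3, "impact": 4},
        {"name": "Data-center flood",          "likelihood": 1, "impact": 5},
    ]

    for risk in risks:
        risk["score"] = risk["likelihood"] * risk["impact"]

    # Highest score first: likely AND damaging rises to the top of the queue.
    for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
        print(f'{risk["score"]:>2}  {risk["name"]}')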

The Bottom Line

Threats don't stand still, and neither should your security planning. Regular risk assessment keeps your defenses aligned with current risks rather than yesterday's problems.

If your last assessment is collecting dust, your security strategy needs an update too.

Saturday, December 13, 2025

When Persistence Beats Protection: MFA Fatigue, Data Brokers, and Why Your Identity Was Already Stolen


I've been thinking a lot about exhaustion lately. Not the kind that comes from long hours or complex projects—the kind that attackers are deliberately weaponizing against us. And it's working.


The Attack That Exploits Human Nature


MFA fatigue attacks are a depressingly clever tactic: the adversary recognizes that the weakest part of any security system isn't the cryptography or the firewall. It's the person at 11 PM who just wants the notifications to stop.


Here's how it works: an attacker steals your credentials—probably through phishing, or from one of the countless breaches exposing nearly every American’s personal information over the past two years. They try to log in. Your phone buzzes with an MFA push notification. You decline it. Another notification. You decline again. Then another. And another. Midnight arrives. You're exhausted. The notifications keep coming.


Eventually, a large percentage of people just approve the request to stop it. The psychology is simple but devastating. We've trained users to respond to prompts. We've built muscle memory around tapping "Approve." And attackers have learned how to weaponize that conditioning. This leads me to a sidebar that matters more than it might seem.


Kevin Mitnick passed away in July 2023 from cancer. For those who don't know, Mitnick was once the most wanted computer criminal in the US—a social engineering pioneer who served five years in federal prison. What's worth remembering isn't just his criminal past but his transformation into one of the most respected white-hat security consultants.



One of my book's reviewers and a close friend, Andrew Starvitz, met Kevin Mitnick, whose business card was a metal lockpick set: perfectly fitting for someone who spent his career showing that most security is just theater if you understand human nature. Andrew also met Frank Abagnale Jr. at a Novell NetWare event, which dates us quite a bit. Abagnale's story, immortalized in "Catch Me If You Can," follows a similar arc: extraordinary criminal ability redirected toward protecting the kinds of systems he once exploited. These transformations remind us that understanding the attacker's mindset isn't just of academic interest. It's critical. The best defenders often think like the people trying to break in.

Speaking of transformations and justice, Ross Ulbricht—founder of the Silk Road marketplace—received a full and unconditional pardon from President Trump in January 2025 after serving more than a decade of his double life sentence, plus forty years. Whatever your views on the case, Ulbricht's release reflects ongoing national discussions about proportional sentencing in tech crimes.


The breach that affected everyone


But here's a development that should keep you awake at night: the National Public Data breach of 2024.

NPD was a data broker—a company that collects, combines, and sells your personal data without your permission and largely without your knowledge. A cybercriminal known as "USDoD" compromised their databases starting in late 2023, exposing about 2.9 billion records and affecting over 272 million people. Names. Addresses. Social Security numbers. Phone numbers. Emails.


The company didn't publicly confirm the breach until August 2024, months after the data was already circulating on the dark web. The owner of Jerico Pictures, Inc.—which does business as National Public Data—is Salvatore Verini, Jr., a former Florida law enforcement officer. He was trusted with hundreds of millions of Americans' most sensitive personal info, stored it on insecure systems, and faced no real criminal consequences when it was stolen and leaked.


Let me be clear: in the past two years, almost every American’s personal information has been compromised through no fault of their own. Our government has been painfully slow at protecting consumers. Data brokers operate with little regulation, peddling our data with no accountability and no real way to opt out. You can check if your data was exposed at npd.pentester.com. Spoiler: it probably was.


The way forward: Passkeys


So, what can you do? Passkeys are the biggest leap forward in authentication security in decades. Passkeys replace passwords entirely with a public-private key pair linked to your device. When you verify your identity, your phone or computer uses local verification (your fingerprint, face, or PIN) to unlock a private key. This private key never leaves your device. No password to steal. No shared credential for attackers to phish or replay.
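
To make "the private key never leaves your device" concrete, here's a minimal Python sketch of the challenge-response idea underneath passkeys, using the cryptography library. This is a conceptual toy, not the actual WebAuthn/FIDO2 protocol:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Enrollment: the device generates a key pair and shares ONLY the public key.
    device_private_key = ec.generate_private_key(ec.SECP256R1())  # stays on device
    server_public_key = device_private_key.public_key()           # server stores this

    # Login: the server sends a fresh random challenge; the device signs it.
    challenge = os.urandom(32)
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # The server verifies the signature; no shared secret ever crossed the wire.
    try:
        server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("authenticated")
    except InvalidSignature:
        print("rejected")

Because there's no password, and the signature only answers this one site's challenge, there's nothing for a phishing page to harvest.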


The security is impressive: passkeys are resistant to phishing because they’re cryptographically tied to specific websites. They eliminate the threat of credential reuse. And they're more convenient because you use familiar authentication methods to unlock your device.

Major platforms now support passkeys. Google, Apple, Microsoft, and most big services have adopted them. Adoption is still early, but this is where authentication is headed.


The conclusion


MFA fatigue attacks succeed because our security systems rely on human vigilance, which ignores human limits. Data brokers have built an industry to gather and sell the info that enables these attacks, while laws lag far behind the threat. And breaches keep happening.


My recommended defense: turn on passkeys everywhere possible. Use number-matching MFA where passkeys aren't available. Never approve an auth request you didn't start—if you keep getting notifications, it’s an attacker, not a glitch. And assume your info is already compromised, because statistically, it probably is. We’re in an environment where identity theft isn't just a risk; it's a reality we must navigate. The real question isn't if your data is out there but how well you can limit the damage.
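
On number matching: the reason it blunts fatigue attacks is that approval requires information only someone looking at the legitimate login screen has. A toy Python sketch of the idea (my own illustration, not any vendor's implementation):

    import secrets

    # The login screen displays a short code; the authenticator app asks the
    # user to type it. A blind "Approve" tap at midnight can no longer succeed.
    def start_login():
        return f"{secrets.randbelow(100):02d}"  # code shown on the login screen

    def approve(displayed_code, code_typed_by_user):
        # compare_digest avoids leaking information through timing differences
        return secrets.compare_digest(displayed_code, code_typed_by_user)

    code = start_login()
    print(approve(code, code))  # legitimate user copies the code: True
    print(approve(code, "00"))  # attacker hoping for a blind tap: almost always False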

---------------

What's your experience with passkey adoption? Have you seen MFA fatigue attacks in your organization? Are the tools we're using keeping up with the threats?


Would love to hear your thoughts. Drop me a line at dr.samkm@protonmail.ch

Friday, November 28, 2025

The Architecture of Genius: How Elon Musk Built SpaceX on Failure (and Ignored 99% of the Experts)


After dissecting the Five-Strategy Framework in my last post (the one about scaling, remember?), I thought I was done with deep dives for the week. Then, during a mindless LinkedIn scroll—we all do it—this article about SpaceX’s collapse-to-conquest story absolutely snagged my attention. Full disclosure: I'm not here to fanboy over Elon Musk the person. But his sheer tenacity, that radical commitment to a First Principles engineering mindset, and the undeniable results of his leadership? Those are qualities I'm a permanent student of. So, let’s break down this alleged "GENIUS Framework" in my own words. I need to understand this architecture better, and maybe, just maybe, it’ll be the blueprint someone else needs today.


In 2008, SpaceX was a wreck. A financial black hole.


Three rockets. Three failures. $100 million gone. Elon Musk was down to his last $30 million, throwing it into the final launch.


The "experts" were unanimous: Cut costs. Play it safe. Pivot.


But Musk, ever the contrarian engineer, didn't just ignore 99% of the advice. He ignored the metric everyone else was tracking. He wasn't optimizing for profit margins, market share, or even successful launches.


He was obsessed with a single data point. The one that separates a pile of crumbling bricks from a towering skyscraper:


The Rate of Innovation.

That's it. How fast could his team iterate, learn, and improve compared to everyone else?

Musk treated engineering like a compounding asset. If SpaceX wasn't learning faster than NASA, they were, by definition, a dead company walking. This single-minded focus became the foundational architecture for the entire organization.


The Real Magic: Data from the Debris


This obsession created a culture where failure wasn't a funeral; it was precious data.


1. Flattened Hierarchy: Bureaucracy is a drag chute on speed. Musk killed the endless meetings and approval chains. The best idea—the one that moved the dial on the Rate of Innovation—won, no matter who proposed it.


2. Failure Analysis in Hours, Not Months: When a rocket failed, they didn't wait a year for a post-mortem report. They tore into the data in days, sometimes hours. While competitors were still fearing mistakes, SpaceX was celebrating the speed of their learning. By the time NASA figured out what went wrong on one test, SpaceX had already prototyped and tested three new solutions.


The ultimate takeaway? In this new culture, playing it safe was career suicide. The only true failure was not innovating. On September 28, 2008, the fourth Falcon 1 launch succeeded. It wasn't luck. It was the moment years of compressed learning finally paid off, laying the first solid brick in what would become a $350+ billion empire.


The GENIUS Framework: The Blueprint You Can Use

Musk’s strategy wasn't about being the smartest guy in the room (though he is). It was about constructing a system where learning and adaptation were the highest priorities.


G: Grind Fast. The principle: Move fast. Launch fast. Learn fast. Perfection is the enemy of progress. How to apply it: Stop over-planning the perfect version 1.0. Get a Minimum Viable Product (MVP) out the door and iterate based on real feedback.

E: Eliminate Bureaucracy. The principle: Kill the approval chains and flatten the hierarchy. How to apply it: Empower the engineers and doers on the ground to make quick, informed decisions without waiting for layers of sign-off.

N: Normalize Failure. The principle: Mistakes are not shameful; they are high-value feedback. How to apply it: Measure learning speed, not just success rate. If you fail fast and learn faster than your competitor, you are winning.

I: Iterate Relentlessly. The principle: Use every single test, failure, or micro-feedback loop to immediately build version 2.0. How to apply it: Don't wait for quarterly reviews. Make iteration your continuous operating system.

U: Understand the Core Problem. The principle: Focus on first principles: "What is the fundamental problem we are solving?" How to apply it: Don't optimize a broken process. Deconstruct the problem down to its physics, and rebuild a better solution from the ground up.

S: Speed of Innovation > Size of Company. The principle: Small, fast-learning teams will always beat slow, lumbering giants. How to apply it: Measure team effectiveness by output velocity and learning curve, not headcount.


An often-overlooked truth about company collapse: companies rarely die because they run out of money overnight. They die because they stop learning. Elon Musk bet everything he had on the single, simple act of learning faster than anyone else on Earth.

Final Thoughts: What can we learn from Elon Musk’s strategy?


  • Don't chase perfection; chase speed of learning.

  • Flatten your process. Good ideas can come from anywhere.

  • Build a culture where failure is feedback.

  • Make iteration your superpower.

  • Measure progress by rate of innovation, not just revenue.


The truth? The companies that learn fastest win.

And that, my friends, is how you build a universe-changing business.

Thoughts this morning from Southeast Asia!

Tuesday, November 25, 2025

Big 5 Strategy Framework

 

Why the Big 5 of Strategy Framework Will Change How We Talk About Leadership

I've spent years watching leadership teams struggle with a problem they couldn't quite name. The strategy was sound. The people were talented. But something wasn't clicking. Execution stalled. Alignment fractured. And no one could articulate why.

Then I came across the Big 5 of the Strategy Competency Framework, and it finally gave language to what I've been observing across technology governance, institutional transformation, and organizational leadership.

The Core Insight

The research behind this framework uncovered a fundamental finding: five universal strategy competencies define how individuals and teams create, shape, and execute strategy. These aren't personality types or work styles. They're observable patterns in how people approach strategic challenges.

The framework operates across three dimensions. First, there's the continuum between thinking and doing—from strategic analysis to strategic execution. Second, there's the tension between stabilizing and transforming—what must endure versus what must evolve. Third, there's adaptability—how quickly we sense, learn, and adjust when conditions change.

Anyone who's led a major technology implementation or institutional transformation recognizes these tensions immediately.

The Five Competencies

Grasp the Present. See reality as it is, not as you wish it to be. This is the competency that prevents the strategic planning document from becoming organizational fiction.

Shape the Future. Envision what's next and chart a bold course. Every institutional transformation starts here—but dies without the other four.

Move the System. Mobilize people and structures to drive change. Strategy documents don't transform organizations. People who can move systems do.

Deliver the Results. Turn plans into outcomes through focus and discipline. I've seen too many brilliant strategies fail because no one owned execution.

Adapt to Change. Stay resilient and responsive to disruption. In volatile environments, this competency often determines survival.

Why This Matters for Leadership Teams

Here's what strikes me most: this framework explains why some teams are cohesive and adaptive while others spin their wheels despite individual talent.

The Big 5 reveals complementary strategic strengths within a group. A team heavy on "Shape the Future" thinkers but light on "Deliver the Results" executors will struggle differently than one with the opposite imbalance. Neither configuration is wrong—but both create predictable dysfunction if you can't see it.

For those of us leading technology transformations, building governance frameworks, or navigating institutional change, this isn't abstract theory. It's a diagnostic tool.

The Strategic Application

I see immediate applications in executive retreats and team alignment sessions—anywhere leaders need shared language for understanding strategic capability. It's equally valuable in coaching relationships, where concrete competencies beat vague development goals every time.

The framework also offers something the strategy world has needed: a way to treat strategic capability as measurable and developable rather than innate talent you either have or don't.

This is more than a model. It's a new lens for understanding how people think and act strategically—and how we can do both better.


What patterns have you observed in high-performing versus struggling leadership teams? I'd be curious whether this framework maps to your experience.