Saturday, February 7, 2026

Zero Trust Architecture - Implementation Challenges: Notes on a Paper

In one of the assigned readings, I read the paper "A systematic literature review on the implementation and challenges of Zero Trust Architecture across domains" by Mushtaq et al., which reviews 74 studies published between 2016 and 2025. It explores how Zero Trust Architecture (ZTA) has been used across various technical and organizational scenarios and highlights the challenges encountered in these implementations.

In a previous course I took at Harvard (CSCI E-155, Networks & Security), I read Project Zero Trust by George Finney, which presents Zero Trust Architecture as a story. I recommend it because it narrates the problem beautifully, in novel style: a hacker holds a fitness company hostage by stealing PII and threatening to make it public, and the company's responders begin implementing Zero Trust Architecture to combat the attack and prevent future infiltrations. This paper spotlights the same core principles in its introduction. According to Finney (2022), ZTA is a strategy, not just a tool, yet the industry often treats it as a tool or framework. This distinction is important and a key theme in Mushtaq et al.'s (2025) literature review.

Finney (2022) outlines six fundamental principles that define ZTA as a strategy rather than a product:
The first principle is to identify and define the protected surfaces. Instead of securing the whole network at once, organizations should focus on what needs protection most—the "Crown Jewels." Finney (2022) calls these DAAS: Data, Assets, Applications, and Services, which are the most important resources the organization must protect.




The second principle is to map transaction flows. This entails understanding how data flows within the organization, documenting how users and systems interact with protected surfaces, and defining normal traffic patterns (Finney, 2022).

Third, the focus moves to designing the network. This requires a careful, robust approach to micro-segmentation, creating a custom environment for each protected surface with micro-perimeters. Finney (2022) describes enforcing each micro-perimeter through a segmentation gateway, a specific entry point designed for each surface.

Fourth, organizations need to set a Zero Trust policy. The "Kipling Method," as Finney (2022) describes, involves formulating detailed rules about Who, What, When, Where, Why, and How traffic is allowed. Each access decision adheres to clear, context-based criteria.
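As a toy illustration of the Kipling Method described above (the field names and values here are hypothetical, not taken from Finney's book), a Zero Trust rule can be expressed as a deny-by-default check that grants access only when all six criteria match:

```python
# Hypothetical Kipling Method policy for one protected surface.
# Every field must match, or the request is denied by default.
POLICY = {
    "who":   {"role": "billing-clerk"},         # Who may connect
    "what":  {"resource": "payments-db"},       # What they may reach
    "when":  {"hours": range(8, 18)},           # When (business hours)
    "where": {"network": "corp-vpn"},           # Where they come from
    "why":   {"purpose": "invoice-processing"}, # Why (declared purpose)
    "how":   {"protocol": "https"},             # How the traffic arrives
}

def allow(request):
    """Deny by default; grant only when every Kipling criterion matches."""
    checks = [
        request["role"] == POLICY["who"]["role"],
        request["resource"] == POLICY["what"]["resource"],
        request["hour"] in POLICY["when"]["hours"],
        request["network"] == POLICY["where"]["network"],
        request["purpose"] == POLICY["why"]["purpose"],
        request["protocol"] == POLICY["how"]["protocol"],
    ]
    return all(checks)
```

A request from the right role at 9 AM over the corporate VPN passes; the same request from coffee-shop Wi-Fi fails one check and is denied. Real policy engines are far richer, but the deny-by-default shape is the point.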

The fifth and sixth principles go together: monitor and keep visibility using analytics, and keep improving over time. A security team that checks and logs all traffic in real time can spot problems and find ways to improve. This monitoring leads to regular reviews of protected surfaces and policies, helping organizations adjust to new threats and changing business needs (Finney, 2022).
With Finney's (2022) strategy in mind, the results of this literature review stand out. Mushtaq et al. (2025) show that, although the industry speaks the language of Zero Trust, the 74 studies they reviewed reveal only partial implementation of its ideas.

The Gap Between Principle and Practice

Mushtaq et al. (2025) found that most ZTA implementations in all areas focus mainly on the basics: authentication, authorization, and access control. These are important, but they are only part of what a real Zero Trust setup needs.

What's consistently missing? Continuous auditing and monitoring, automated policy orchestration, and environmental or context-aware perception (Mushtaq et al., 2025). Mapped against Finney's (2022) principles, this means the industry has made progress on establishing policies and verifying identity (principles three and four), but has largely neglected the transaction flow mapping, immediate monitoring, and continuous improvement cycles (principles two, five, and six) that make Zero Trust a living strategy rather than a static configuration.
In other words, most organizations have locked the front door but have not installed cameras, alarms, or systems to detect when something is wrong inside. They have treated Zero Trust as just a tool, which is exactly the mistake Finney (2022) warns about.

Where It's Working — and Where It Isn't

Cloud and enterprise environments have made the most progress toward mature ZTA implementations (Mushtaq et al., 2025). This makes sense — these domains have mature tooling, well-defined architectural patterns, and the resources to commit to comprehensive security redesigns. Remote work acceleration during and after the pandemic pushed many enterprises to adopt Zero Trust principles out of necessity, and cloud providers have built native support into their platforms.
The story changes dramatically when you look at other domains. Two stand out as particularly challenging.


IoT: Too Constrained for Full Zero Trust

The Internet of Things represents one of the most difficult frontiers for Zero Trust adoption. Mushtaq et al. (2025) identified 11 IoT-focused studies, making it the second-most-studied domain — a reflection of both its importance and its complexity. The fundamental problem is resource constraints. IoT devices — sensors, embedded controllers, industrial monitors — often run on minimal processing power, limited memory, and constrained battery life. The cryptographic operations that Zero Trust demands (continuous authentication, encrypted communications, and token validation) can overburden these devices or drain them faster than they can be maintained (Mushtaq et al., 2025).

The authors identify a major gap: the lack of lightweight cryptographic solutions customized to these environments (Mushtaq et al., 2025). Standard enterprise-grade security protocols simply don't translate to a temperature sensor running on a microcontroller. Until the security community develops cryptographic approaches that are simultaneously robust and resource-efficient, IoT Zero Trust implementations will remain experimental and incomplete.

There's also the scale problem. An enterprise might manage thousands of user accounts. An IoT deployment may include tens of thousands of devices, each requiring its own identity and generating its own trust signals. The orchestration challenge alone is staggering, and most current solutions don't handle it well. Through Finney's (2022) lens, identifying and defining protected surfaces in an IoT environment — where DAAS elements are distributed across thousands of constrained devices — becomes exponentially more complex than in a traditional enterprise setting.

Healthcare: Where Compliance and Architecture Collide

Healthcare was the third-most-studied domain, with 7 studies, and it presents a distinct yet equally instructive set of challenges (Mushtaq et al., 2025). Here, the issue isn't primarily about device constraints — it's about the collision between Zero Trust principles and regulatory reality.
Healthcare organizations operate under strict frameworks such as HIPAA in the United States and the GDPR in Europe. These regulations have specific requirements around data access, audit trails, patient consent, and breach notification. Mushtaq et al. (2025) found that most ZTA implementations in healthcare struggle to fully conform to these frameworks, particularly in data administration, continuous auditing, and the explainability of automated access decisions.
Consider the tension: Zero Trust calls for dynamic, context-aware access decisions — a system might grant or revoke access to patient records based on real-time signals such as device health, location, or behavioral patterns. But HIPAA necessitates clear, auditable justification for every access decision. When an AI-driven trust engine denies a clinician access to a patient's records during a critical moment, the organization needs to explain exactly why — and the current generation of context-aware trust engines often can't provide that level of transparency.

This is where Finney's (2022) Kipling Method becomes both essential and difficult to execute. Writing granular rules based on Who, What, When, Where, Why, and How is precisely what healthcare regulators demand — but doing so dynamically, at scale, across a hospital's sprawling ecosystem of electronic health records, medical devices, telemedicine platforms, pharmacy systems, and insurance integrations is still a largely unsolved challenge.

What Needs to Happen Next

Mushtaq et al. (2025) don't just catalog problems—they point to a clear set of priorities for the field, many of which correspond directly with the strategic vision Finney (2022) articulated.
First, lightweight cryptography needs to move from research curiosity to production reality. Without it, Zero Trust will remain impractical for the fastest-growing categories of connected devices (Mushtaq et al., 2025).

Second, context-aware trust engines need to become more sophisticated and more transparent. Dynamic access decisions are powerful, but only if they can be audited, explained, and consistent with the regulatory contexts where they operate (Mushtaq et al., 2025).

Third, orchestration cannot be an afterthought. The hardest part of Zero Trust is not checking a single request, but rather maintaining a clear, enforceable policy across multiple systems simultaneously (Mushtaq et al., 2025). Finally, regulatory integration should be planned from the beginning, not added after the system is built. The difference between what Zero Trust systems do and what regulations require them to record is a major barrier to adoption (Mushtaq et al., 2025).


The Bottom Line


Zero Trust is the right approach. The idea of "never trust, always verify" makes sense in a world devoid of clear boundaries and rife with threats. However, this review shows that the industry is still in its early stages. Most implementations focus solely on access control and overlook the monitoring, orchestration, and compliance features that enable Zero Trust to function effectively (Mushtaq et al., 2025).

As Finney (2022) reminds us, Zero Trust is a strategy. It is a cycle of identifying what matters most, understanding how it is accessed, building protections, writing explicit policies, monitoring everything, and continually improving. The 74 studies reviewed by Mushtaq et al. (2025) show that the industry has started this journey, but still has a long way to go. The areas where this is most important—IoT, healthcare, and industrial systems—are also where the risks are highest. Getting Zero Trust right in these fields is not only a technical task. It is essential.

References

Finney, G. (2022). Project Zero Trust: A story about a strategy for aligning security and the business. Wiley.

Mushtaq, S., Mohsin, M., & Mushtaq, M. M. (2025). A systematic literature review on the implementation and challenges of Zero Trust Architecture across domains. Sensors, 25(19), 6118.





Monday, February 2, 2026

On Neumann's Paper "Toward Total-System Trustworthiness" - We're Building Houses of Cards

As one of my New Year's goals, I have committed to writing a few blog posts a month, and this is a good opportunity to use class work to express my rants and ramblings while getting some learning done in the process.  

Peter G. Neumann is a legend in computer security for good reason. His article "Toward Total-System Trustworthiness" names something most of us in technology leadership sense but rarely articulate: we're playing a losing game. Every patch, every wrapper, every clever workaround adds another card to a structure that was never designed to bear the weight we're placing on it.

The Southwest Airlines meltdown brought this into sharp relief. A classmate in the discussion posts I replied to earlier today pointed out that their catastrophic failure during the winter storm wasn't a technology problem—it was an archaeology problem. Southwest had essentially wrapped an old 1990s-era scheduling system called SkySolver in newer interfaces, hoping the wrapper would compensate for foundations that were never updated for modern scale. When the storm hit, the sheer volume of data overwhelmed the underlying logic, and no amount of clever interfacing could save it.

Neumann calls this the "patch-on-patch" approach. I call it "technical debt," a term I know well from my days in software engineering and leading product development teams, with its haunting reminders of bug fixing and facing the music from unhappy customers, all coming due with compound interest.

Why Total-System Trustworthiness Remains Elusive

After twenty-plus years leading technology operations across global organizations, I've come to believe there are four fundamental reasons why achieving true system trustworthiness remains aspirational at best—especially when you're simultaneously responsible for keeping the lights on.

The "Less Untrustworthy" Objective: The "Less Untrustworthy" Objective. People tend to use simple binary categories when evaluating systems because they believe these systems exist in only two states: secure or insecure, and trustworthy or broken. Neumann presents this concept as a gradient that transforms the entire system. The main priority should be to minimize untrustworthy actions because we understand that humans will always fall short of achieving complete trust with one another. Medical practice requires drug interaction screening instead of achieving absolute treatment success. In systems, it means assuming your components will fail and engineering the resilience to absorb it.

The Legacy Trap: Neumann argues for clean-slate design where it matters most, because bolting security onto existing, outdated systems rarely works. It is like constructing a building: once the foundation is poured, it constrains the shape of everything above it. Much of computing still rests on the x86 architecture and the C programming language, both developed before cyber warfare as we know it existed. A sinking foundation cannot be perfected retroactively. We can only shore it up and resolve to build the next one differently.

Anticipating the "Space Aliens: I am reminded of a computer game I used to play :) Anticipating the "Space Aliens."Security professionals defend against space aliens using a humorous method that illustrates a basic threat modeling principle that seems ridiculous at first. Neumann explains that we cannot predict all environmental elements, including floods, earthquakes, and zero-day exploits, but we should design systems that continue to function properly during decline. A dependable system produces clear, limited failures rather than complete system breakdowns.

Designing with Humility: The most crucial element runs counter to an industry that prizes speed and self-assurance. Accepting that complex systems can never be fully verified makes us more likely to build observability that detects failures early, compartmentalization that keeps a crack in one floor slab from collapsing the whole roof, and formal methods to verify the components we actually can control.

The Leadership Paradox

What keeps me up at night is that technology leaders inherit systems rather than design them. We carry dual responsibilities: keeping existing systems running while working to make them trustworthy. The barrier is organizational and philosophical as much as technical. Neumann's lesson is that we must move beyond short-term security fixes and openly discuss our systems' fundamental vulnerabilities. That demands humility and a willingness to be transparent about how our operations really work.

The question is not whether our systems will fail; they will. The question is whether we can build systems that fail in ways we can survive.


Friday, January 23, 2026

When Algorithms Shape Reality: How AI and Social Media Are Rewiring How We Think

 


Last November, Adam Aleksic delivered a five-minute TED talk that landed harder than presentations three times its length. His premise is deceptively simple: the AI tools and platforms we use daily aren't showing us reality—they're showing us a filtered, amplified, distorted version of it. And we're absorbing that distortion without realizing it.

I've been sitting with this one for a while. As someone who lives at the intersection of technology strategy and organizational transformation, I recognize the pattern Aleksic describes. I've seen it play out in enterprise systems, in user behavior, in the subtle ways digital tools reshape the humans who use them.

The Language We're Learning Isn't Ours

Here's the detail that stopped me cold.

ChatGPT uses the word "delve" at rates far exceeding normal English usage. The likely explanation? OpenAI outsourced portions of its training process to workers in Nigeria, where "delve" appears more frequently in everyday speech. A minor linguistic quirk from a specific population was reinforced during training and is now reflected among hundreds of millions of users worldwide.

But it doesn't stop there. Multiple studies have found that since ChatGPT's release, people everywhere—not just users—have started saying "delve" more often in spontaneous conversation. We're unconsciously absorbing the AI's patterns and mirroring them back.

As Aleksic puts it: "We're subconsciously confusing the AI version of language with actual language. But that means that the real thing is, ironically, getting closer to the machine version of the thing."

Read that again. The real is conforming to the artificial.

The Feedback Loop No One Asked For

This isn't just about vocabulary. Aleksic points to Spotify's "hyperpop" genre as a case study in algorithmic reality creation.

The term didn't exist in our cultural vocabulary until Spotify's algorithm identified a cluster of similar listeners. Once the platform created a playlist and gave the phenomenon a label, it became more real. Musicians started producing hyperpop. Listeners began identifying with or against it. The algorithm continued to push, and the cluster expanded. What started as an algorithmic observation became a cultural movement.

The same pattern drives viral trends—matcha, Labubu toys (the world is going crazy), and Dubai chocolate (I see them everywhere from Costco to World Market). An algorithm identifies latent interest, amplifies it among similar users, and, suddenly, businesses and influencers create content around what may have been an artificially inflated trend. We lose the ability to distinguish between organic cultural shifts and manufactured momentum.

The Uncomfortable Question

Aleksic doesn't shy away from the deeper implications.

"Evidence suggests that ChatGPT is more conservative when speaking the Farsi language, likely because the limited training texts in Iran reflect the more conservative political climate in the region."

If AI systems inherit the biases of their training data—and they do—what happens when millions interact with those systems daily? What range of thoughts do we stop considering because the algorithm never surfaced them? What possibilities get filtered out before we ever encounter them?

Elon Musk regularly modifies Grok's responses when he disagrees with them, then uses X to amplify his own content. Aleksic asks the obvious question: Are millions of Grok and X users being subtly conditioned to align with Musk's ideology?

These platforms aren't neutral. Everything in your feed or your chatbot response has been filtered through layers of optimization—what's good for the platform, what makes money, and what conforms to the platform's necessarily incomplete model of who you are.

Thinking About Thinking

Twenty-two years in global technology leadership has taught me something about systems: they shape behavior far more than we acknowledge. The tools we build eventually build us. The interfaces we design become the cognitive architecture through which users experience their work, their relationships, their world.

What Aleksic describes is that phenomenon at civilizational scale.

"TikTok has a limited idea of who you are as a user," he notes, "and there's no way that matches up with your complex desires as a human being."

And yet we scroll. We engage. We absorb. We mirror back.

The Only Defense

Aleksic's antidote is persistent self-interrogation:

Why am I seeing this? Why am I saying this? Why am I thinking this? Why is the platform rewarding this?

Simple questions. Difficult discipline.

"If you're talking more like ChatGPT," Aleksic concludes, "you're probably thinking more like ChatGPT as well, or TikTok or Spotify. If you don't ask yourself these questions, their version of reality is going to become your version of reality."

There's something almost spiritual in that warning. The ancient disciplines of self-examination—examine yourselves to see whether you are in the faith—take on new urgency when the voices shaping our inner dialogue aren't human at all.

The question isn't whether these tools are useful. They are. The question is whether we're using them—or being used by them.

Stay awake. Stay questioning. Stay real.

Friday, January 16, 2026

Decoding the Attack Vector: Entry Points in the Digital Build

 

Attack Vectors and Attack Surfaces

In the world of physical security, you don’t just worry about "theft"; you worry about the unlocked window, the side door with the faulty latch, or the delivery driver who isn't who they say they are. In cybersecurity, these specific pathways are our Attack Vectors.

An attack vector is simply the "how" and the "where" an adversary gains unauthorized access to your network. While the Attack Surface is the sum total of your exposure, the Vectors are the individual paths leading into the heart of the system.

The Common Vulnerabilities (The "Leaky Pipes")

Identifying attack vectors is the first step in hardening your infrastructure. Here are the primary culprits we see in the field:

  • Social Engineering & Phishing: This is the "human exploit." Instead of hacking the code, they hack the person. Whether it’s a credential-stealing link or a deceptive PDF attachment, this remains the #1 entry point for ransomware.

  • Account Takeovers (ATO): This happens when identity management fails. Stolen session cookies, brute-forced passwords, or credentials bought on the dark web allow attackers to walk through the front door as a "trusted" user.

  • The Insider Threat: Whether malicious (the disgruntled admin) or accidental (the dev who leaves an S3 bucket open), the threat from within is often the hardest to mitigate because the "vector" is already inside the perimeter.

  • Vulnerability Exploits (The Unpatched Flaw): Software isn't perfect. Bugs in code are like faulty locks. If you’re running unpatched "Zero-Day" vulnerabilities, you’ve essentially left a master key under the welcome mat.

  • Infrastructure Misconfigurations: Open ports are the digital equivalent of leaving the garage door open. If a port isn't serving a specific business function, it should be closed. Period.

  • Browser & Application Compromise: Because we live in a "Cloud-First" world, the browser is the new endpoint. Malicious scripts (XSS) or "poisoned" third-party apps can turn a standard web session into a bridge for malware.
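To make the misconfiguration point above concrete, here is a minimal Python sketch for checking whether a TCP port is reachable (host and port values are placeholders; real port audits use purpose-built scanners, this only illustrates the idea):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno value on failure.
        return s.connect_ex((host, port)) == 0
```

Running a check like this against your own hosts and comparing the results to the list of ports that actually serve a business function is a quick way to spot the "open garage doors."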

Hardening the Perimeter: Practical Mitigation

You cannot eliminate every vector—the only 100% secure system is one that is turned off and buried in concrete. However, you can make the "cost of entry" too high for most attackers.

  1. Identity as the New Perimeter: Use MFA and session monitoring to kill the effectiveness of stolen credentials.

  2. Aggressive Patching: Automate your updates. A vulnerability is only a vector if it remains unpatched.

  3. Browser Isolation: Treat the public internet as "untrusted" by default. Executing code in a containerized environment keeps the mess off your local network.

  4. SASE (Secure Access Service Edge): As we move away from the traditional office, SASE integrates networking and security into a single cloud-native stack, closing the gap between the user and the app.
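To ground the MFA recommendation above, here is a minimal sketch of TOTP (RFC 6238), the algorithm behind most authenticator-app codes; it is illustrative only, not a production authenticator:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP using the default HMAC-SHA1 inner function."""
    if t is None:
        t = time.time()
    # Moving factor: number of 30-second steps since the Unix epoch.
    counter = struct.pack(">Q", int(t) // step)
    mac = hmac.new(base64.b32decode(secret_b32), counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret plus the current time, a stolen password alone is not enough—though, as the MFA-fatigue discussion elsewhere on this blog shows, the human approving the prompt is still the weak link.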

The Bottom Line

Think of your security posture like a building's blueprint. You can't remove every door, but you can ensure every door has a deadbolt, a camera, and a guard. By systematically identifying and closing off attack vectors, you shrink your Attack Surface and force the adversary to look for an easier target elsewhere.

Friday, January 9, 2026

Five Things You Should Know About IT Risk Assessment

 


Every organization faces data security threats. Hackers get smarter, attacks become more common, and security budgets stay tight. You can't protect everything equally, so you need to identify your biggest weaknesses and address them first.

That's what IT risk assessment does. It helps you identify, assess, and prioritize data security risks so you can focus your time and budget where they matter most.

Here are five things worth knowing about it.
At the higher education institution where I work, we created a thoughtful exercise using a simple Excel spreadsheet: every area or department meets twice a year to self-evaluate its risks and the likelihood of impact. If you are interested, take a look at the sample sheet, which you can download and adapt for your own organization.

1. Risk assessment tells you where to focus your security efforts

Risk assessment and risk management sound similar, but they're different. Risk management is about controlling specific problems. Risk assessment is the bigger picture work of understanding all the threats you face, both inside and outside your organization.

Think of it this way: risk assessment helps you see the full map of dangers. Risk management is what you do about each one.

A good risk assessment might reveal misconfigured user permissions, forgotten active accounts, or admin rights that have become out of control. Once you know about these problems, you can fix them before someone exploits them.

2. Many regulations require it

If your organization must comply with regulations such as HIPAA or GDPR, you likely need to conduct risk assessments. These regulations don't tell you exactly how to protect your systems, but they do require you to have security controls in place and be able to prove it.

Skipping risk assessment doesn't just leave you vulnerable to attacks. It can also lead to failed audits and expensive fines.

3. Frameworks make it easier to get started

You don't have to invent your own approach. Several well-tested frameworks exist that tell you what to look at, who should be involved, how to analyze what you find, and what to document.

Three popular options are OCTAVE (created by Carnegie Mellon University), NIST SP 800-30, and ISO/IEC 27001:2013. Pick one that fits your organization's size and needs, then adapt it as necessary.

All of these frameworks expect you to document your process. This creates a paper trail showing you're taking security seriously.

4. You have to keep doing it

Risk assessment isn't something you do once and forget about. Your IT environment changes constantly. New software gets installed, employees come and go, and attackers find new tricks.

A risk assessment from two years ago won't catch the inactive account someone forgot to disable last month or the permissions that have gradually gotten out of hand.

Make risk assessment a regular habit, not a one-time project.

5. The process has three basic steps

Risk assessment breaks down into three parts:

Find the risks. Look for weaknesses in your systems: users may have more access than they need, password policies may be too weak, or old accounts may still be active.

Estimate how likely each risk is. Not every weakness will actually cause a problem. Consider how probable it is that someone could exploit each vulnerability you found.

Decide what to tackle first. Combine likelihood with potential damage. A risk that's both likely and would cause severe harm warrants immediate attention. Something unlikely and minor can wait.
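The three steps above can be sketched as a toy risk register (the entries and the 1-5 scales are invented for illustration; real frameworks such as NIST SP 800-30 define their own scoring):

```python
# Step 1: find the risks (a toy register with invented entries).
risks = [
    {"name": "stale admin accounts",   "likelihood": 4, "impact": 5},
    {"name": "weak password policy",   "likelihood": 3, "impact": 4},
    {"name": "unlocked server closet", "likelihood": 1, "impact": 2},
]

def prioritize(register):
    """Steps 2-3: score each risk as likelihood x impact, highest first."""
    return sorted(register,
                  key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

for r in prioritize(risks):
    print(r["name"], r["likelihood"] * r["impact"])
```

Even this simple likelihood-times-impact ranking makes the point: the stale admin accounts (score 20) demand attention long before the server closet (score 2).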

The Bottom Line

Threats don't stand still, and neither should your security planning. Regular risk assessment keeps your defenses aligned with current risks rather than yesterday's problems.

If your last assessment is collecting dust, your security strategy needs an update too.

Saturday, December 13, 2025

When Persistence Beats Protection: MFA Fatigue, Data Brokers, and Why Your Identity Was Already Stolen


I've been thinking a lot about exhaustion lately. Not the kind that comes from long hours or complex projects—the kind that attackers are deliberately weaponizing against us. And it's working.


The Attack That Exploits Human Nature


MFA fatigue attacks are among the cleverest tactics in cybersecurity: the adversary recognizes that the weakest part of any security system isn't the cryptography or the firewall. It's the person at 11 PM who just wants the notifications to stop.


Here's how it works: an attacker steals your credentials—probably through phishing, or from one of the countless breaches exposing nearly every American’s personal information over the past two years. They try to log in. Your phone buzzes with an MFA push notification. You decline it. Another notification. You decline again. Then another. And another. Midnight arrives. You're exhausted. The notifications keep coming.


Eventually, a large percentage of people just approve the request to stop it. The psychology is simple but devastating. We've trained users to respond to prompts. We've built muscle memory around tapping "Approve." And attackers have learned how to weaponize that conditioning. This leads me to a sidebar that matters more than it might seem.


Kevin Mitnick passed away in July 2023 from cancer. For those who don't know, Mitnick was once the most wanted computer criminal in the US—a social engineering pioneer who served five years in federal prison. What's worth remembering isn't just his criminal past but his transformation into one of the most respected white-hat security consultants.



One of my book's reviewers and a close friend, Andrew Starvitz, met Kevin Mitnick. Mitnick carried a metal lockpick set as a business card, perfectly fitting for someone who spent his career showing that most security is just theater if you understand human nature. Andrew also met Frank Abagnale Jr. at a Novell NetWare event, which dates us quite a bit. Abagnale's story, immortalized in "Catch Me If You Can," follows a similar arc: extraordinary criminal ability redirected toward protecting the systems he once exploited. These transformations remind us that understanding the attacker's mindset isn't just of academic interest. It's critical. The best defenders often think like the people trying to break in.

Speaking of transformations and justice, Ross Ulbricht—founder of the Silk Road marketplace—received a full and unconditional pardon from President Trump in January 2025 after serving more than a decade of his double life sentence, plus forty years. Whatever your views on the case, Ulbricht's release reflects ongoing national discussions about proportional sentencing in tech crimes.


The breach that affected everyone


But here's a development that should keep you awake at night: the National Public Data breach of 2024.

NPD was a data broker—a company that collects, combines, and sells your personal data without your permission and largely without your knowledge. A cybercriminal known as "USDoD" compromised their databases starting in late 2023, exposing about 2.9 billion records and affecting over 272 million people. Names. Addresses. Social Security numbers. Phone numbers. Emails.


The company didn't publicly confirm the breach until August 2024, months after the data was already circulating on the dark web. The owner of Jerico Pictures, Inc.—which does business as National Public Data—is Salvatore Verini, Jr., a former Florida law enforcement officer. He was trusted with hundreds of millions of Americans' most sensitive personal info, stored it on insecure systems, and faced no real criminal consequences when it was stolen and leaked.


Let me be clear: in the past two years, almost every American’s personal information has been compromised through no fault of their own. Our government has been painfully slow at protecting consumers. Data brokers operate with little regulation, peddling our data with no accountability and no real way to opt out. You can check if your data was exposed at npd.pentester.com. Spoiler: it probably was.


The way forward: Passkeys


So, what can you do? Passkeys are the biggest leap forward in authentication security in decades. Passkeys replace passwords entirely with a public-private key pair bound to your device. When you verify your identity, your phone or computer uses local verification—your fingerprint, your face, or a device PIN—to unlock a private key, which then signs a one-time challenge from the website. The private key never leaves your device. There is no password to steal, and no reusable credential for an attacker to phish, replay, or spray.


The security is impressive: passkeys are resistant to phishing because they’re cryptographically tied to specific websites. They eliminate the threat of credential reuse. And they're more convenient because you use familiar authentication methods to unlock your device.
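To make the phishing-resistance claim concrete, here is a minimal, standard-library-only sketch of the challenge-response idea. It is a toy model, not the real WebAuthn/FIDO2 protocol: actual passkeys use asymmetric signatures (the site stores only a public key), while this sketch substitutes an HMAC so it runs without third-party crypto libraries. All names (`Authenticator`, `RelyingParty`, `rp_id`) are illustrative.

```python
import hmac
import hashlib
import secrets

class Authenticator:
    """Toy passkey authenticator. Real passkeys sign with a per-site
    asymmetric key pair; HMAC stands in here so the sketch stays
    standard-library only."""
    def __init__(self):
        self._keys = {}  # relying-party ID -> device-bound secret

    def register(self, rp_id):
        # A fresh credential is minted per site and never leaves the device.
        # (A real authenticator would export only the *public* key.)
        self._keys[rp_id] = secrets.token_bytes(32)
        return self._keys[rp_id]

    def sign(self, rp_id, challenge):
        # The assertion is bound to the origin the browser reports, so a
        # look-alike phishing domain simply has no credential to use.
        key = self._keys.get(rp_id)
        if key is None:
            raise KeyError(f"no credential registered for {rp_id}")
        return hmac.new(key, rp_id.encode() + challenge, hashlib.sha256).digest()

class RelyingParty:
    """The website: issues a fresh challenge and verifies the response."""
    def __init__(self, rp_id, key):
        self.rp_id, self._key = rp_id, key

    def verify(self, challenge, assertion):
        expected = hmac.new(self._key, self.rp_id.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, assertion)

auth = Authenticator()
key = auth.register("example.com")
site = RelyingParty("example.com", key)

challenge = secrets.token_bytes(16)          # fresh per login, never reused
assert site.verify(challenge, auth.sign("example.com", challenge))

# A look-alike phishing domain holds no credential at all:
try:
    auth.sign("examp1e.com", challenge)
except KeyError:
    pass  # nothing to phish, nothing to replay
```

The key point the sketch captures: the credential is scoped to the site that registered it, so even a pixel-perfect phishing page at a different origin receives nothing it can forward to the real site.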

Major platforms now support passkeys. Google, Apple, Microsoft, and most big services have adopted them. Adoption is still early, but this is where authentication is headed.


The conclusion


MFA fatigue attacks succeed because our security systems rely on human vigilance while ignoring human limits. Data brokers have built an industry to gather and sell the info that enables these attacks, while laws lag far behind the threat. And breaches keep happening.


My recommended defense: turn on passkeys everywhere possible. Use number-matching MFA where passkeys aren't available. Never approve an auth request you didn't start—if you keep getting notifications, it’s an attacker, not a glitch. And assume your info is already compromised, because statistically, it probably is. We’re in an environment where identity theft isn't just a risk; it's a reality we must navigate. The real question isn't if your data is out there but how well you can limit the damage.
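Why does number matching blunt fatigue attacks? Because approval is no longer a single tap: the approver must type a code that is displayed only on the screen that started the login, which the attacker never sees. A minimal sketch of the idea, assuming hypothetical function names (`start_push`, `approve`) rather than any vendor's actual API:

```python
import secrets

def start_push(session):
    # Server side of number-matching MFA: mint a two-digit code and display
    # it ONLY on the screen that initiated the login attempt.
    session["code"] = f"{secrets.randbelow(100):02d}"
    return session["code"]

def approve(session, typed_code):
    # The approver must type the on-screen code into the authenticator app.
    # An attacker spamming pushes never sees the victim's screen, so a blind,
    # habitual "Approve" no longer works. Codes are single-use.
    return typed_code == session.pop("code", None)

session = {}
shown = start_push(session)          # number shown on the login page
assert approve(session, shown)       # legitimate user copies it into the app
assert not approve(session, shown)   # replay fails: the code was consumed
```

Tapping "Approve" out of habit now requires guessing the code, and each push mints a fresh one, so the midnight notification barrage stops paying off.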

---------------

What's your experience with passkey adoption? Have you seen MFA fatigue attacks in your organization? Are the tools we're using keeping up with the threats?


Would love to hear your thoughts. Drop me a line at dr.samkm@protonmail.ch

Friday, November 28, 2025

The Architecture of Genius: How Elon Musk Built SpaceX on Failure (and Ignored 99% of the Experts)


After dissecting the Five-Strategy Framework in my last post (the one about scaling, remember?), I thought I was done with deep dives for the week. Then, during a mindless LinkedIn scroll—we all do it—this article about SpaceX’s collapse-to-conquest story absolutely snagged my attention. Full disclosure: I'm not here to fanboy over Elon Musk the person. But his sheer tenacity, that radical commitment to a First Principles engineering mindset, and the undeniable results of his leadership? Those are qualities I'm a permanent student of. So, let’s break down this alleged "GENIUS Framework" in my own words. I need to understand this architecture better, and maybe, just maybe, it’ll be the blueprint someone else needs today.


In 2008, SpaceX was a wreck. A financial black hole.


Three rockets. Three failures. $100 million gone. Elon Musk was down to his last $30 million, throwing it into the final launch.


The "experts" were unanimous: Cut costs. Play it safe. Pivot.


But Musk, ever the contrarian engineer, didn't just ignore 99% of the advice. He ignored the metric everyone else was tracking. He wasn't optimizing for profit margins, market share, or even successful launches.


He was obsessed with a single data point. The one that separates a pile of crumbling bricks from a towering skyscraper:


The Rate of Innovation.

That's it. How fast could his team iterate, learn, and improve compared to everyone else?

Musk treated engineering like a compounding asset. If SpaceX wasn't learning faster than NASA, they were, by definition, a dead company walking. This single-minded focus became the foundational architecture for the entire organization.


The Real Magic: Data from the Debris


This obsession created a culture where failure wasn't a funeral; it was precious data.


1. Flattened Hierarchy: Bureaucracy is a drag chute on speed. Musk killed the endless meetings and approval chains. The best idea—the one that moved the dial on the Rate of Innovation—won, no matter who proposed it.


2. Failure Analysis in Hours, Not Months: When a rocket failed, they didn't wait a year for a post-mortem report. They tore into the data in days, sometimes hours. While competitors were still fearing mistakes, SpaceX was celebrating the speed of their learning. By the time NASA figured out what went wrong on one test, SpaceX had already prototyped and tested three new solutions.


The ultimate takeaway? In this new culture, playing it safe was career suicide. The only true failure was not innovating. On September 28, 2008, the fourth Falcon 1 launch succeeded. It wasn't luck. It was the moment years of compressed learning finally paid off, laying the first solid brick in what would become a $350+ billion empire.


The GENIUS Framework: The Blueprint You Can Use

Musk’s strategy wasn't about being the smartest guy in the room (though he is). It was about constructing a system where learning and adaptation were the highest priorities.


Here is the framework, element by element—each with its architectural principle and how to apply it:

G — Grind Fast. Move fast. Launch fast. Learn fast. Perfection is the enemy of progress. How to apply it: stop over-planning the perfect version 1.0; get a Minimum Viable Product (MVP) out the door and iterate based on real feedback.

E — Eliminate Bureaucracy. Kill the approval chains and flatten the hierarchy. How to apply it: empower the engineers and doers on the ground to make quick, informed decisions without waiting for layers of sign-off.

N — Normalize Failure. Mistakes are not shameful; they are high-value feedback. How to apply it: measure learning speed, not just success rate. If you fail fast and learn faster than your competitor, you are winning.

I — Iterate Relentlessly. Use every single test, failure, or micro-feedback loop to immediately build version 2.0. How to apply it: don't wait for quarterly reviews; make iteration your continuous operating system.

U — Understand the Core Problem. Focus on first principles: "What is the fundamental problem we are solving?" How to apply it: don't optimize a broken process; deconstruct the problem down to its physics and rebuild a better solution from the ground up.

S — Speed of Innovation > Size of Company. Small, fast-learning teams will always beat slow, lumbering giants. How to apply it: measure team effectiveness by output velocity and learning curve, not headcount.


The truth about company collapse is often overlooked: companies rarely die because they suddenly run out of money. They die because they stop learning. Elon Musk bet everything he had on the single, simple act of learning faster than anyone else on Earth.

Final Thoughts: What can we learn from Elon Musk’s strategy?


Don't chase perfection; chase speed of learning.

Flatten your process. Good ideas can come from anywhere.

Build a culture where failure is feedback.

Make iteration your superpower.

Measure progress by rate of innovation, not just revenue.


And that, my friends, is how you build a universe-changing business.

Thoughts this morning from Southeast Asia!