Insights & Writing

The Blog

Thoughts on cybersecurity, AI, and the systems that connect them.

Featured
Apr 2026 Cybersecurity

What Is the Best Cybersecurity Software for Small Businesses?

A practical breakdown of the tools, platforms, and strategies that actually make sense for small businesses — without the enterprise price tag or complexity.

Mar 2026 AI & ML

Will Cybersecurity Be Replaced by AI?

AI is transforming threat detection and response — but does that mean security professionals are headed for obsolescence? A nuanced look at where the industry is actually going.

Feb 2026 Cybersecurity

Is Cyber Security Hard?

An honest take on what it actually takes to break into cybersecurity — the learning curve, the certifications, and what nobody tells you before you start.

Jan 2026 Cybersecurity

Aman Navani: Engineering at the Intersection of AI and Cybersecurity

A look at the person behind the code — my background, what drives me, and how I ended up at the intersection of AI and cybersecurity.

Jan 2026 Cybersecurity

Zero Trust Is Not a Product

Why zero-trust segmentation is a philosophy before it's a technology — and what that means for how organizations should approach implementation.

Dec 2025 AI & ML

Agentic AI in Healthcare: Lessons from the Field

What I've learned building and integrating agentic AI modules into a live clinical platform — the wins, the edge cases, and the stakes.

Nov 2025 Engineering

The Effect of Machine Learning Models in Intrusion Detection Systems

An exploration of how supervised and unsupervised ML approaches are reshaping how we detect, classify, and respond to network intrusions.


What Is the Best Cybersecurity Software for Small Businesses?

I get this question a lot — usually from friends running small businesses or founders who just got their first office and suddenly realized they have no idea how to protect it. So let me break this down the way I wish someone had broken it down for me early on.

First, Ignore Most "Best Of" Lists

Seriously. Most of those roundups are written for enterprises with massive budgets and full security teams. If you're a 15-person company where the "IT guy" is also running operations, you don't need a $50K SIEM platform. You need stuff that works, that's simple to manage, and that your team will actually use.

Because here's the thing — the most expensive security tool in the world is useless if nobody configures it properly or if it's so complex that people just ignore it.

Endpoint Protection — Start Here

Your laptops, desktops, and phones are the front line. Every small business needs some form of endpoint detection and response (EDR). I'd look at something like CrowdStrike Falcon Go, SentinelOne, or honestly even Microsoft Defender for Business if you're already in the Microsoft ecosystem. They all offer real-time threat detection and automated remediation without needing a dedicated analyst babysitting a dashboard.

The key thing I tell people: pick the one with the dashboard you can actually read. If the UI confuses you, you won't use it.

Email Security — This Is Where Attacks Actually Happen

Phishing is still the number one way small businesses get compromised. It's not even close. I've seen it over and over. Someone clicks a link in what looks like a legit invoice email, and suddenly you've got a real problem.

If you're on Microsoft 365 or Google Workspace, you've got basic filtering built in — but basic isn't cutting it anymore. Layer on a dedicated email security tool that rewrites URLs, scans attachments in a sandbox, and catches those sneaky business email compromise attempts where someone impersonates your CEO asking for a wire transfer.

Network Security — Think Zero Trust (Even at a Small Scale)

I work in zero-trust segmentation at Illumio, so I'm obviously biased here — but the core idea applies to businesses of every size. The old model of "firewall at the edge, trust everything inside" is broken. One compromised laptop shouldn't give an attacker access to your entire network.

You don't need to go full enterprise zero-trust on day one. Start practical: segment your network so your guest Wi-Fi is completely separate from your business systems. Enforce multi-factor authentication on every single account — no exceptions. Use a DNS-layer security tool to block known malicious domains before they even load. These are easy wins that make a huge difference.
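To make the DNS-layer idea concrete, here's a toy Python sketch of blocklist-based domain filtering. The domain names and the local lookup are illustrative assumptions; real protective-DNS products do this at the resolver, but the decision logic is essentially this:

```python
# Toy DNS-layer filter: refuse to resolve domains on a blocklist.
# Blocklist entries are invented examples, not real malicious domains.
BLOCKED_DOMAINS = {"malware-example.test", "phish-example.test"}

def should_resolve(domain: str) -> bool:
    """Block a domain if it, or any parent domain, is on the blocklist."""
    parts = domain.lower().split(".")
    # Build every suffix: "cdn.bad.test" -> {"cdn.bad.test", "bad.test", "test"}
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & BLOCKED_DOMAINS)
```

Checking parent domains matters because attackers routinely spin up fresh subdomains under a known-bad apex.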

Backups — The Thing Everyone Forgets Until It's Too Late

Ransomware doesn't care how big your company is. In fact, attackers increasingly target small businesses specifically because they know you're less likely to have solid backups. I can't stress this enough: automated, encrypted, offsite backups. Test them regularly. If you can't restore your systems in under four hours, your backup strategy needs work.
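A minimal sketch of the "test them regularly" point, in Python: a freshness check that flags a backup directory whose newest file is stale. The directory layout and the 24-hour threshold are assumptions for illustration; wire something like this into whatever monitoring you already have.

```python
from pathlib import Path
import time

MAX_AGE_HOURS = 24  # alert if the newest backup is older than this (assumed SLA)

def newest_backup_age_hours(backup_dir: str) -> float:
    """Return the age in hours of the most recently modified file in backup_dir."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        raise FileNotFoundError(f"no backups found in {backup_dir}")
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return (time.time() - newest.stat().st_mtime) / 3600

def backups_are_fresh(backup_dir: str) -> bool:
    """True if the latest backup is within the allowed age window."""
    return newest_backup_age_hours(backup_dir) <= MAX_AGE_HOURS
```

A freshness check only proves a backup exists, of course — periodically doing a full restore is what proves it works.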

So What's the "Best" Software?

There isn't one. The best cybersecurity posture for a small business is a layered approach — endpoint protection, email security, network segmentation, identity management, and backups. Each one chosen for simplicity and effectiveness at your scale.

Invest in tools your team will actually use, train your people to recognize social engineering, and build security into how you operate — not just what you install. That's the real answer, even if it's not as satisfying as a single product name.

If you've got questions about your specific setup, feel free to reach out. I'm always happy to talk through this stuff.

Will Cybersecurity Be Replaced by AI?

I've been asked some version of this question at every networking event, family dinner, and LinkedIn DM for the past year. So let me just put my thoughts down here once and for all.

Short answer: no. Longer answer: it's complicated, and the reality is way more interesting than the hot takes.

What AI Is Already Doing (And Doing Well)

Let's give credit where it's due. AI is genuinely transforming parts of cybersecurity. ML models are powering threat detection systems that can spot anomalous network behavior in milliseconds — way faster than any human analyst scrolling through logs at 2am. NLP is being used to chew through threat intelligence feeds. Automated response platforms can isolate a compromised endpoint before anyone even gets paged.

I've seen this firsthand. My graduate thesis focused on how ML models perform in intrusion detection systems, and the results are real. AI catches patterns humans miss. That's not hype — it's math.

But Here's What People Get Wrong

Cybersecurity isn't just pattern matching. It's adversarial. You're not solving static problems — you're fighting people. Smart, creative, motivated people who study your defenses, adapt their tactics, and exploit the gap between what your model was trained on and what's actually happening right now.

AI struggles with truly novel attack vectors. It struggles with context — understanding why a particular behavior is suspicious for this specific user, at this specific time, in this specific organization. It can't negotiate with a ransomware gang. It can't make judgment calls about public disclosure timing. It can't navigate the political minefield of a post-breach boardroom meeting.

And honestly? Some of the hardest parts of security are fundamentally human — convincing a CEO to invest in infrastructure nobody sees, training employees not to click that link, building a security culture that doesn't just rely on tools.

What's Actually Happening: Augmentation

The way I see it, AI is replacing the parts of cybersecurity that burn people out. The alert fatigue. The endless log analysis. The repetitive triage at 3am. Good riddance, honestly. What's left is the work that's actually interesting — strategy, architecture, threat hunting, and the adversarial thinking that machines genuinely can't replicate.

The professionals who thrive will be the ones who learn to work alongside AI, using it as a force multiplier rather than viewing it as competition. And the demand for people who understand both AI and security? That's going up, not down. The attack surface is expanding way faster than AI can secure it on its own.

My Honest Take

AI will replace certain tasks in cybersecurity. It will not replace the discipline. If anything, the field is getting more complex and more in-demand, not less. If you're in security and worried about AI taking your job — learn to use it. Make it your advantage. The people who combine deep security knowledge with AI fluency are going to be incredibly hard to replace.

And if you're considering getting into cybersecurity and wondering if there's still a future in it — there absolutely is. The robots need us more than we need them. For now, at least.

Is Cyber Security Hard?

I'm going to be straight with you because I think most content about "breaking into cybersecurity" is either overly intimidating or unrealistically optimistic. The truth is somewhere in the middle, and I think my own path is a decent example of what it actually looks like.

Yeah, It's Hard. But Not How You Think.

Cybersecurity isn't hard because you need to be some kind of genius. It's hard because the field is wide. Like, really wide. It touches networking, programming, operating systems, cryptography, compliance, risk management, cloud infrastructure, and even psychology. Nobody masters all of it. And honestly, nobody expects you to.

When I first started, the volume of things to learn was genuinely overwhelming. Zero trust, SIEM, SOC, pen testing, threat modeling, incident response — every term opened up another rabbit hole. And the field moves fast. A vulnerability disclosed on Monday can be weaponized by Wednesday. You can't just learn it once and be done.

What Actually Gets You Through

Here's what I wish someone had told me earlier: the people who do well in cybersecurity aren't the ones who memorized every CVE or can recite RFCs from memory. They're the ones who are genuinely curious, who ask good questions, and who don't give up when something doesn't click right away.

You don't need a CS degree to start. You don't need to be a coding wizard — though being comfortable with Python helps a lot. Many security roles lean more on analysis, communication, and risk thinking than on writing code. I came through an M.S. in Cybersecurity Management, and a lot of what I learned was about process, governance, and how to think about risk at an organizational level. There's a path in for a lot of different backgrounds.

Certs — Useful, Not Magic

I have my CompTIA Security+ and AWS Certified Developer, and they were genuinely worth the effort. Not because they made me an instant expert, but because they forced me through a structured learning path covering topics I might've otherwise skipped. They're great for getting past the resume filter and into interviews.

But don't fall into the certification treadmill trap. I've met people with a wall of certs and no hands-on experience. Set up a home lab. Break things on purpose. Do CTF competitions. Contribute to open-source projects. The doing matters way more than the credential.

The Part Nobody Warns You About

The technical side is genuinely learnable. Put in the time, stay curious, and you'll get there. The part that's actually hardest? The emotional weight. You're often the person in the room saying "slow down" or "we can't do that" when everyone else wants to move fast. You carry the constant awareness of what could go wrong. When a breach happens — and eventually one will — the pressure is intense and personal.

But I'll also say this: it's one of the most rewarding careers in tech. The problems are real. The work matters. You're protecting people, businesses, and critical systems. And it is genuinely never boring.

So, Should You Do It?

If you're the kind of person who's curious about how systems work — and more importantly, how they break — cybersecurity might be a great fit. It's not easy, but it's deeply satisfying. The learning never stops, which is either terrifying or exciting depending on your personality. For me, it's the best part.

If you're thinking about making the jump and want to talk it through, hit me up. I'm always down to chat about this stuff.

Zero Trust Is Not a Product

I work at Illumio, a company that literally builds zero-trust segmentation technology. So believe me when I say — zero trust is not something you buy. It's something you build. And that distinction matters more than most people realize.

How We Got Here

Somewhere along the way, "zero trust" became a marketing term. Every security vendor started slapping it on their product pages. Firewalls? Zero trust. VPNs? Zero trust. Identity platforms? Zero trust. It's gotten to the point where the phrase almost means nothing, which is a shame because the underlying idea is genuinely important.

What It Actually Means

At its core, zero trust is a philosophy: never trust, always verify. No user, device, or workload gets implicit access to anything. Every connection is authenticated, authorized, and continuously validated. It's a fundamental shift in how you think about your security posture — not a box you install.

What I see every day at Illumio is the practical side of this. Zero-trust segmentation is about controlling how workloads communicate, so that even if an attacker gets past your perimeter, they can't move laterally through your environment. But the technology only works if the organization has actually embraced the principle behind it. You can deploy the best segmentation platform in the world, and it won't help if your teams don't understand why they're doing it.

It's a Journey, Not a Switch

You don't wake up one morning and "become" zero trust. It's a progressive transformation that touches identity, network architecture, endpoint security, application design, and — maybe most importantly — culture. The organizations I've seen succeed start by mapping their critical assets, understanding their data flows, and then incrementally layering in micro-segmentation and least-privilege access.

The ones that fail? They're looking for a shortcut. They want a single vendor solution that lets them check a compliance box and move on. That's not how this works.

Where I'd Start

If you're at the beginning of this journey, focus on three things:

  • Identity — enforce MFA everywhere. Adopt conditional access policies. Start moving toward passwordless if you can.
  • Segmentation — stop treating your network as one flat, trusted zone. A breach in one area shouldn't give an attacker the keys to everything.
  • Visibility — you can't protect what you can't see. Map your environment. Monitor your traffic. Understand your actual attack surface, not just the one you think you have.
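As a toy illustration of the segmentation bullet, here is a default-deny policy check in Python: a flow is allowed only if an explicit rule exists for it. The zone names and rule format are made up for this sketch and are not any vendor's API — the point is simply that the absence of a rule means "blocked", not "allowed":

```python
# Toy default-deny segmentation policy. A connection is permitted only if an
# explicit (source_zone, dest_zone, port) rule exists; everything else is denied.
# Zone names and rules are invented for illustration.
ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may reach the app tier over TLS
    ("app", "db", 5432),    # app tier may reach Postgres
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: no matching rule means the connection is blocked."""
    return (src, dst, port) in ALLOW_RULES
```

Under this model, guest Wi-Fi reaching the database isn't something you have to remember to block — it's blocked because nobody ever wrote a rule allowing it.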

Zero trust isn't a destination — it's an operating model. The sooner more organizations internalize that, the better their security posture will be. And no, buying a product with "zero trust" in the name doesn't count.

Agentic AI in Healthcare: Lessons from the Field

Most of the AI-in-healthcare conversation is about chatbots answering patient questions. That's fine, but it's not where the real transformation is happening. I've been working on something much more interesting — and much harder — and I wanted to share some of what I've learned.

What We're Actually Building

At VMO Tech, I helped build a cross-platform mobile app that's used by over 100 clients and 10+ physicians. It handles health assessments, lab results, imaging — and at its core, there's an AI-powered predictive model providing real-time clinical guidance. More recently, I've been leading the integration of agentic AI modules — systems that don't just respond to prompts but autonomously recommend next-step clinical insights and shape how patients interact with their care teams.

This isn't a demo or a proof of concept. These are systems in active clinical use, making recommendations that real physicians review and act on every day. That changes the engineering calculus completely.

The Stakes Hit Different

Building agentic AI for healthcare is fundamentally different from building it for, say, e-commerce or customer support. If a shopping recommendation is bad, someone buys a shirt they don't like. If a clinical recommendation is wrong, the consequences can be serious. A missed flag, an inappropriate treatment suggestion, a false sense of confidence — the margin for error is incredibly thin.

That's why every module we deploy goes through rigorous validation, and there's always a physician in the loop. The temptation with agentic AI is to keep giving it more autonomy. In healthcare, I've learned that restraint is a feature, not a limitation. You earn the right to more autonomy by proving accuracy over time.

Things I've Learned the Hard Way

Data quality is everything. I spent a significant chunk of my time engineering datasets — cleaning, validating, structuring clinical data — before any model touched it. Garbage in, garbage out isn't just a cliché in healthcare. It's a patient safety issue.

Trust is earned, not assumed. Physicians are rightfully skeptical of AI. We earned adoption by starting with low-risk recommendations, proving accuracy over months, and always making it trivially easy for clinicians to override or dismiss a suggestion. The moment you make a doctor feel like the AI is making decisions for them, you've lost them.

The interface is half the battle. A brilliant prediction buried in a cluttered UI is a wasted prediction. We obsessed over when, where, and how insights surfaced, because clinical attention is a genuinely scarce resource. If the information isn't in the right place at the right time, it might as well not exist.

Where This Is Going

Agentic AI in healthcare will get more capable and more autonomous over time. But the path there is gradual, evidence-based, and has to stay human-centered. The organizations that rush to automate clinical decisions without first building trust and validation infrastructure will fail — and in healthcare, failure has real human cost.

I'm excited about where this is headed. But I'm also glad the field is moving carefully. Some things are worth getting right.

The Effect of Machine Learning Models in Intrusion Detection Systems

This was the topic of my graduate thesis, and honestly, it's still something I think about all the time. The intersection of ML and network security is where a lot of the most interesting work in cybersecurity is happening right now, and I wanted to break down what I found in a way that's accessible even if you're not deep in the research.

The Problem with Traditional IDS

Traditional intrusion detection systems work on signatures — they match network traffic against a database of known attack patterns. If traffic matches a known signature, it gets flagged. Simple and effective for known threats.
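Conceptually, a signature-based check is just pattern matching, as in this deliberately minimal Python sketch. The signature strings are invented stand-ins; real rule languages like Snort's or Suricata's are far richer, but the shape of the decision is the same:

```python
# Minimal signature-based detector: traffic is flagged only if it matches a
# known pattern. Signatures here are crude, invented examples.
KNOWN_SIGNATURES = [
    "' OR '1'='1",        # classic SQL-injection probe
    "../../etc/passwd",   # path traversal attempt
]

def matches_signature(payload: str) -> bool:
    """Flag the payload if any known signature appears in it."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)
```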

The problem is obvious: if the attack is new, the signature doesn't exist yet. And as attacks get more sophisticated and zero-day exploits become more common, the gap between "what we know about" and "what's actually happening" keeps growing. That's where machine learning comes in.

Supervised vs. Unsupervised: The Trade-Off

In my research, I looked at both supervised and unsupervised approaches, and each has real strengths and real limitations.

Supervised models — random forests, SVMs, deep neural networks trained on labeled datasets of normal and malicious traffic — are great at classifying known attack types. When the training data is representative, accuracy is impressive. But they struggle with what they haven't seen. If your training set doesn't include a specific attack variant, the model can miss it completely.

Unsupervised models — clustering, autoencoders, anomaly detection — take the opposite approach. They learn what "normal" looks like and flag anything that deviates. This makes them better at catching novel attacks, but they come with higher false positive rates. And in a production SOC, alert fatigue from false positives can be just as damaging as missed detections. I've seen teams that literally start ignoring alerts because there are too many — which defeats the entire purpose.

The Hybrid Approach That Actually Works

The most promising results I found came from combining both. Use supervised classifiers for known threat categories where you want high accuracy, and layer unsupervised anomaly detection on top to catch the unknowns. The trick is tuning the balance so the unsupervised layer supplements the supervised one without flooding analysts with noise.

It sounds straightforward on paper, but getting the thresholds right in practice is genuinely hard engineering work. And it's not a set-it-and-forget-it situation — the models need to be continuously retrained as network behavior evolves.

The Gap Between Research and Production

One thing I want to be honest about: academic benchmarks can be misleading. A model that hits 99% accuracy on a curated dataset might perform very differently on live, encrypted, messy production traffic. You're dealing with massive data volumes, concept drift, encrypted payloads you can't inspect, and adversaries who are actively trying to evade your detection.

That gap between research and deployment is where the real engineering challenge lives. It's not glamorous work, but it's where the impact actually happens.

Where I Think This Goes

ML-powered intrusion detection isn't experimental anymore. It's in commercial products, running in real SOCs, catching real threats. But it's not replacing skilled analysts — it's making them better. The future is human expertise augmented by machine intelligence, each covering the other's blind spots.

That's what got me excited about this space in the first place, and it's only getting more interesting. If you're working on similar problems or just want to nerd out about IDS architectures, I'm always up for that conversation.

© 2026 Aman Navani — All rights reserved