
Securing a Web App When Security Is Not Your Job

12 March 2026 · 5 min read

My day job is data engineering. I can model a problem, untangle a broken pipeline, argue with SQL until it produces something sensible. What I am not, and have never pretended to be, is a security engineer. That's a different discipline entirely. Different vocabulary, different threat models, different long list of ways you can be confidently wrong.

When FairwayPlan went live, I'd done the obvious stuff. HTTPS via Let's Encrypt. Cloudflare in front. That felt like enough. Reader: it was not.

What follows is a plain-language account of what I found when I actually looked, what each thing meant, and what I changed. Not a comprehensive guide. Just an honest record of the gap between "it has HTTPS" and "it is actually secure."

The thing I was most embarrassed about

My FastAPI backend had this sitting in it:

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    ...
)

CORS is the browser's mechanism for controlling which external websites are allowed to make requests to your API. allow_origins=["*"] means any website, anywhere. For a purely public API, that's arguably fine. The problem is pairing it with allow_credentials=True.

What that combination actually does: it instructs the browser to include cookies and auth headers on cross-origin requests, while simultaneously allowing any origin to make them. In practice, a malicious site could fire off requests to my API on behalf of a logged-in user, and the browser would happily oblige. The fix is two lines: lock down the allowed origins to my own domain, flip allow_credentials to false.

I had copied this from a tutorial years ago and never questioned it. That's probably the single most common vector for security holes in side projects, code that was written for convenience and never revisited.

What nginx was quietly exposing

FastAPI ships with built-in interactive API docs at /docs. Genuinely useful during development: you can poke at every endpoint, inspect inputs and outputs, and trigger real requests from the browser. Great locally. Not something you want sitting open in production.

I had left it exposed. Anyone who found it could browse the full API surface and fire at will, including the itinerary generation endpoint, which kicks off a full solver run and weather pipeline lookup. Not catastrophic for FairwayPlan specifically. Still a bad habit.

Fix: disable the docs in production (docs_url=None in FastAPI) and pull the nginx location blocks that were forwarding /docs and /openapi.json.

Same problem, different surface: Umami, the analytics tool I self-host for traffic stats. I proxy it through nginx at /stats/, the intention being to serve the tracking script and accept analytics events. What I'd actually done was proxy the entire Umami application, including the admin login at /stats/login. Anyone who knew to look could have a crack at my analytics dashboard.

The fix was surgical. Replaced the wildcard proxy with two exact-match locations covering only the two paths a browser legitimately needs: /stats/script.js and /stats/api/send. Everything else gets a 404.
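In nginx terms the shape is roughly this (the upstream address is illustrative):

```nginx
# Serve only the two Umami paths a visitor's browser legitimately needs
location = /stats/script.js {
    proxy_pass http://127.0.0.1:3001/script.js;
}
location = /stats/api/send {
    proxy_pass http://127.0.0.1:3001/api/send;
}
# Everything else under /stats/, including /stats/login, gets a 404
location /stats/ {
    return 404;
}
```

The `=` modifier makes those exact-match locations, so nothing else can sneak through on a prefix match.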

Rate limiting

The itinerary generation endpoint is genuinely expensive to run. Full MIP solve, weather pipeline, a few hundred database queries. I had nothing stopping someone from hammering it in a loop.

nginx has rate limiting built in. Five lines of config: declare a zone, set a max rate, apply it to the endpoint, add a small burst allowance, return 429 when it's exceeded. Understanding the syntax took longer than writing it.

Ten requests per minute per IP, burst of three. For someone genuinely planning a trip that's acres of headroom. For an automated script, it's at least a speed bump.
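Those five lines look roughly like this (the endpoint path and upstream are illustrative):

```nginx
# In the http block: a shared-memory zone keyed by client IP,
# capped at 10 requests per minute
limit_req_zone $binary_remote_addr zone=itinerary:10m rate=10r/m;

# In the server block, applied only to the expensive endpoint
location /api/itinerary {
    limit_req zone=itinerary burst=3 nodelay;
    limit_req_status 429;
    proxy_pass http://127.0.0.1:8000;
}
```

`burst=3 nodelay` lets a legitimate user fire a few quick requests without queuing; anything beyond that gets the 429.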

Security headers

There's a standard set of HTTP response headers browsers use as instructions for handling your content. Mine had none of them.

The ones worth understanding for a site like this:

  • Strict-Transport-Security, tells the browser to always use HTTPS for this domain, even if someone types the URL without it. Shuts down downgrade attacks.
  • X-Frame-Options: DENY, stops your site loading inside an <iframe> on someone else's page. Blocks clickjacking, the attack where a malicious page layers invisible UI over yours to trick users into clicking things.
  • X-Content-Type-Options: nosniff, tells the browser to trust the content-type the server declared and not try to guess. Closes off a category of injection attack.
  • Content-Security-Policy, the powerful one, and the fiddly one. An explicit allowlist of where scripts, styles, images, and network requests are permitted to come from. Mine is conservative: same-origin for almost everything, with the frame-ancestors 'none' directive doubling up on X-Frame-Options.

All of these live in nginx as add_header directives. Zero runtime cost, no application changes required, meaningful reduction in attack surface.
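As nginx directives, that set looks something like this. The exact CSP shown is illustrative; a real policy needs carve-outs for whatever the site actually loads (the analytics script, map tiles, and so on):

```nginx
# HSTS: always use HTTPS for this domain, six-month lifetime
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
# Never allow this site to load inside a frame on another origin
add_header X-Frame-Options "DENY" always;
# Trust the declared content-type; never sniff
add_header X-Content-Type-Options "nosniff" always;
# Conservative allowlist: same-origin by default, no framing
add_header Content-Security-Policy "default-src 'self'; frame-ancestors 'none'" always;
```

The `always` flag matters: without it, nginx only adds headers on 2xx/3xx responses, so error pages would go out unprotected.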

The smaller things

A few more items, individually minor, collectively worth doing:

Share codes. FairwayPlan generates an 8-character code when you save an itinerary. I was using Python's random module for this, Mersenne Twister under the hood, which is fine for simulations and absolutely not fine for anything functioning as a token. Swapped it out for secrets.choice(), which draws from the OS cryptographic random source. Two-line fix.
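The replacement is short. A sketch, with the alphabet as an assumption (the real one may differ):

```python
import secrets
import string

# Hypothetical alphabet; uppercase letters plus digits
ALPHABET = string.ascii_uppercase + string.digits

def make_share_code(length: int = 8) -> str:
    # secrets draws from the OS cryptographic random source,
    # unlike random's Mersenne Twister, whose output is predictable
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Same shape of code as before, different module, and now the codes aren't guessable by anyone who observes a few of them.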

Version disclosure. nginx puts a Server header on every response broadcasting its version number. Next.js sends X-Powered-By: Next.js. Neither is a direct vulnerability, but why hand anyone scanning for targets a free head start? server_tokens off in nginx, poweredByHeader: false in Next.js config. Done.

Booking links. Course URLs come from the database and land directly in href attributes. If one somehow had a javascript: scheme instead of https://, a browser would execute it on click. Added a small validator that checks the URL protocol before it ever reaches the DOM. Belt and braces.
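The actual validator lives in the frontend, but the idea translates directly; here is the same check sketched in Python, with the allowed schemes as an assumption:

```python
from urllib.parse import urlparse

# Only plain web links are acceptable in an href
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_booking_url(url: str) -> bool:
    # Rejects javascript:, data:, vbscript:, and anything else unexpected
    try:
        return urlparse(url.strip()).scheme.lower() in ALLOWED_SCHEMES
    except ValueError:
        return False
```

Allowlisting schemes is the robust direction; blocklisting just `javascript:` misses encodings and oddball schemes you didn't think of.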

Docker builds. Changed npm install to npm ci in the Dockerfile. The difference: npm install can quietly resolve slightly different dependency versions at build time. npm ci installs exactly what's in the lockfile and fails loudly if there's any discrepancy. Better for reproducibility, better for knowing what's actually running in production.
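For context, the relevant build stage has this shape (base image and steps are illustrative, not the real Dockerfile):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Copy the manifests first so dependency install is cached separately
COPY package.json package-lock.json ./
# npm ci installs exactly the lockfile versions and fails loudly on mismatch
RUN npm ci
COPY . .
RUN npm run build
```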

Final bits

Ports 80 and 443 on the droplet are now restricted at the firewall level, via a DigitalOcean firewall rule that allows inbound HTTP/HTTPS traffic only from Cloudflare's published IP ranges. All traffic must pass through Cloudflare before it reaches the server, and the origin can no longer be accessed directly by someone who knows the droplet IP.

The honest conclusion

Nothing I fixed was exotic. Most of it was configuration that should have been set from day one and wasn't, because I was focused on getting the thing working rather than locking it down. Reasonable priority order for a side project. But "it works" and "it's secure" are not the same bar.

The most useful thing I did was stop assuming Cloudflare and HTTPS were sufficient and actually look at what was exposed. A few hours and a few hundred lines of config, and the gap between "it has HTTPS" and "it is actually secure" turned out to be wider than I expected. Worth it.