How I Used ChatGPT and Codex to Add a New Security Layer to My Homelab

From Firewall to Mini-SOC

 


I already had a structured lab with segmentation, remote access, and a clear network design. What I wanted next was to add another layer of security: not another layer of blocking, but another layer of visibility.

I am also heading toward the Security+ exam, so this project fit perfectly into what I want to learn anyway: not just how to install tools, but how to think about monitoring, visibility, architecture, and controlled rollout. 

The target was a small but real detection pipeline:

OPNsense firewall
→ Suricata intrusion detection
→ Syslog forwarding
→ Wazuh SIEM
→ searchable alerts in a dashboard

This is the kind of chain you would find, in much larger form, in a Security Operations Center, usually shortened to SOC. A Security Operations Center is the part of an organization that watches security events, reviews alerts, and investigates suspicious activity. In a large company, that might mean analysts, ticket queues, and expensive enterprise tools. In a homelab, it can be a simpler version of the same idea: detect traffic, collect logs, centralize them, and make them visible.

This blog entry is about how I built that layer into my network with help from ChatGPT and Codex. Not as some magical black box, and not as “AI doing everything for me,” but as a structured collaboration between planning, execution, and human validation.


Architecture Overview

 

Before this project, the network already had a clear structure.

Internet access comes from a FritzBox. Behind that sits an OPNsense firewall, running as a virtual machine on my Proxmox host Athena. OPNsense separates the Home LAN from my internal management zone.

The simplified path looks like this:

Internet
   ↓
FritzBox
   ↓
Home LAN (192.168.178.0/24)
   ↓
OPNsense WAN
   ↓
OPNsense LAN / VLAN10 (192.168.10.0/24)
   ↓
Management systems
 

Inside VLAN10 live the systems that currently matter most in the lab: infrastructure, monitoring, test systems, and the new SIEM server that would later become wazuh01.

This detail mattered. At several points in the project it would have been easy to drift into “future architecture” thinking and focus on less relevant segments. But the real priority was VLAN10, because that is where the meaningful systems already were. That became one of the first management decisions in the project: monitor the part of the environment that already matters, not the part that might matter later.

Remote administration was already done through WireGuard VPN, and the firewall policy was based on default deny. That means traffic is blocked unless explicitly allowed. So the preventive security layer already existed. What I wanted now was the detective layer on top of it.

In simple words: the firewall already controlled access. I wanted the network to tell me when something interesting happened.
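Default deny is easiest to see in firewall-rule form. OPNsense manages its rules through the GUI, but conceptually the policy reads like this pf-style sketch (illustrative pseudorules, not the actual generated ruleset; the interface macro and network are placeholders):

```
# Illustrative default-deny policy, pf-style
block in all                          # default deny: drop everything first...
pass in on $lan_if \
    from 192.168.10.0/24 to any \
    keep state                        # ...then allow only what is explicitly permitted
```

Everything that is not matched by an explicit pass rule falls through to the block rule, which is exactly the "blocked unless explicitly allowed" behavior described above.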


The Team: Daniel, ChatGPT, and Codex

What makes this project interesting to me is not just the technology. It is also the way it was run.

This was effectively a three-part team.

Daniel — meaning me — had the real environment, the real constraints, and the final responsibility. I knew which systems mattered, which things were negotiable, and which were not. I also had to validate results in the live environment, create the screenshots for the final documentation, and add my personal touch.

ChatGPT acted as the planning and management layer. That included architecture decisions, sequencing, risk thinking, trade-offs, and translating vague goals into controlled project steps. It also meant beginner-friendly explanations, which mattered to me because I wanted to understand the infrastructure, not just copy commands.

 

Codex acted as the execution layer. Once a task was clearly defined, Codex could do the technical work quickly and systematically: checking systems, applying configuration, verifying services, collecting evidence, and writing technical summaries.

 

The workflow looked like this:

Daniel
defines priorities, constraints, validation, screenshots, final responsibility
↓
ChatGPT
helps with architecture, sequencing, and management decisions
↓
Codex
executes the technical steps and documents the evidence
↓
Daniel
reviews the outcome in the real lab

That separation turned out to be extremely practical. Some of this work progressed while I was physically sitting in an AEVO workshop, not at home manually clicking through interfaces. So this was not “AI replacing admin work.” It was much closer to “AI helping one person manage infrastructure work more productively while still keeping control and validation in human hands.”

That matters to me because it lines up exactly with how I describe my work now:

IT administration trainee with hands-on systems, networking, documentation and automation skills, using AI carefully for productivity while validating outputs and securing real systems.

That sentence sounds good on LinkedIn, but I wanted a project that actually proves it. This is one of those projects.


Security Design Decisions

Before diving into commands, it is worth highlighting that this project was not just a tool install. It was a sequence of deliberate design decisions.

IDS before IPS

We started with IDS, meaning Intrusion Detection System, not IPS, meaning Intrusion Prevention System.

An IDS watches traffic and generates alerts. An IPS goes further and can block traffic inline.

That sounds attractive, but it is also how people accidentally break their own environment while trying to improve security. So the decision was:

Visibility first
Blocking later, if needed

Dedicated Wazuh VM

Wazuh was deployed on its own dedicated virtual machine instead of being squeezed onto the firewall or into some unrelated existing server.

That kept the architecture cleaner:

OPNsense = firewall and detection source
Wazuh = central analysis and monitoring system

VLAN10 first

Even though other segments existed in the broader lab design, the meaningful monitoring priority at this point was VLAN10, because that is where the active infrastructure lived. ChatGPT initially assumed my extra VLAN30 was the most important part, but at the moment the snapshot was taken, that was just a minor lab environment separated from the actual "management" subnet. So the project focused on what was operationally important now.

No unnecessary hardening detours

Some generic security checklists suggest things that are technically fine in abstract, but not always helpful in context.

For example, SSH keys were discussed, but I intentionally postponed them because I currently log in from multiple different hosts and did not want to introduce that complexity in the lab right now. External access is already secured with VPN access through two firewalls. Similarly, a host firewall on the Wazuh VM would have added more friction than benefit, because the VM already sits behind OPNsense inside the management network. See my post on packet drops for more on that.

That pattern repeated throughout the project:

  • define the real environment

  • challenge assumptions

  • choose what fits the environment

  • automate only after the decision is clear

That, to me, is where AI becomes useful: not in replacing judgment, but in helping structure it.


Step 1: Enabling Suricata Without Breaking the Network

 

The first security component was Suricata, which is an Intrusion Detection System.

An intrusion detection system watches network traffic and compares it to known suspicious patterns called signatures. In detection mode, it does not block anything. It just logs and alerts.

That was exactly what I wanted at this stage.

Suricata was enabled on OPNsense in pcap mode, which means detection-only operation rather than inline blocking. It was configured to monitor both the WAN-facing and LAN-facing interfaces.

Command:

configctl ids status

Output:

suricata is running as pid 11489

To verify the actual settings, the configuration was checked directly.

Command:

sed -n '828,875p' /conf/config.xml

Output excerpt:

<enabled>1</enabled>
<mode>pcap</mode>
<interfaces>wan,lan</interfaces>
<syslog_eve>1</syslog_eve>

For a beginner, that means:

  • Suricata is turned on

  • it is running in listen-only mode

  • it watches both WAN and LAN

  • and it exports events in a format that can later be forwarded elsewhere

At that point, the firewall gained visibility without changing how packets were handled. That was the safe rollout I wanted.
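In practice, the quickest way to see what Suricata is actually logging is to filter eve.json for alert events, since the file also contains flow, DNS, and other record types. A minimal Python sketch of that idea (the field names follow the EVE JSON format; the sample lines are illustrative, not real captures):

```python
import json

def eve_alerts(lines):
    """Yield only the alert events from Suricata EVE JSON lines."""
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or non-JSON lines
        if event.get("event_type") == "alert":
            yield event

# Illustrative EVE lines: one flow record, one alert record
sample = [
    '{"event_type": "flow", "src_ip": "192.168.10.20"}',
    '{"event_type": "alert", "src_ip": "192.168.10.20", '
    '"alert": {"signature": "ET DNS Query for .to TLD", "signature_id": 2027757}}',
]

for alert in eve_alerts(sample):
    print(alert["alert"]["signature_id"])  # → 2027757
```

On the firewall itself the same filtering is usually done with jq against /var/log/suricata/eve.json, but the logic is the same: keep only `event_type == "alert"`.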


Step 2: Proving That Suricata Detects Real Events

A running service is not yet a proven control.

So the next step was to verify that Suricata was not just enabled, but actually producing meaningful alerts.

First, the pipeline was validated with a temporary local test rule just to prove packet capture, rule matching, and alert logging worked end to end. That was useful, but it was still only a pipeline check.

The stronger validation came from a real Emerging Threats rule.

The Emerging Threats ruleset is a widely used Suricata signature collection. It contains patterns for suspicious traffic seen in the real world. One of those rules flags DNS queries to certain top-level domains.

A safe DNS query triggered one of those live rules.

Command:

dig @192.168.10.1 easylist.to A

From /var/log/suricata/eve.json:

"signature": "ET DNS Query for .to TLD",
"signature_id": 2027757,
"src_ip": "192.168.10.20",
"dest_ip": "192.168.10.1",
"proto": "UDP",
"dest_port": 53

This does not prove that a real attack happened. It proves something more important for this phase: that the live Suricata ruleset was actively detecting and logging a real rule match.

That distinction matters. I did not want to oversell the event. I wanted a validated pipeline.
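Once alerts start accumulating, a simple first triage step is counting events per signature, which quickly shows whether one rule dominates the log. A minimal sketch using Python's Counter over parsed EVE alert events (sample data is illustrative):

```python
import json
from collections import Counter

def count_signatures(lines):
    """Count alert events per Suricata signature across EVE JSON lines."""
    counts = Counter()
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate truncated lines
        if event.get("event_type") == "alert":
            counts[event["alert"]["signature"]] += 1
    return counts

sample = [
    '{"event_type": "alert", "alert": {"signature": "ET DNS Query for .to TLD"}}',
    '{"event_type": "alert", "alert": {"signature": "ET DNS Query for .to TLD"}}',
    '{"event_type": "dns"}',
]

print(count_signatures(sample).most_common(1))
# → [('ET DNS Query for .to TLD', 2)]
```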


Validation Evidence

At this stage, the same alert logic had already been confirmed in multiple layers.

The event was tied to:

signature: ET DNS Query for .to TLD
signature_id: 2027757
proto: UDP
destination port: 53

This was the point where Suricata stopped being “installed software” and started becoming a functioning detective control.


Step 3: Deploying Wazuh as the SIEM Layer

Once Suricata was working, the next layer was Wazuh, which acts as the SIEM in this setup.

A SIEM, short for Security Information and Event Management, is a system that collects logs from different sources, parses them, and makes it possible to search, correlate, and review them centrally.

Wazuh was deployed on a dedicated virtual machine:

Name: wazuh01
IP: 192.168.10.20
OS: Ubuntu Server 24.04 LTS
CPU: 4 vCPU
RAM: 8 GB
Disk: 120 GB

The installation used the official assisted installer.

Command:

curl -sS -O https://packages.wazuh.com/4.14/wazuh-install.sh
sudo bash ./wazuh-install.sh -a

After installation, the services were verified.

Command:

systemctl is-active wazuh-manager wazuh-indexer wazuh-dashboard

Output:

active
active
active

The dashboard became reachable here:

https://192.168.10.20

 

This was an important milestone because the lab now had a real analysis platform waiting on the other side of the firewall.


Step 4: Connecting OPNsense to Wazuh

Now came the part where the rabbit hole got a little deeper.

Suricata on OPNsense already produced EVE JSON events, and OPNsense could already forward logs via Syslog. So instead of overengineering the integration, the chosen path was intentionally simple:

OPNsense
→ Suricata EVE events
→ syslog forwarding
→ rsyslog on wazuh01
→ local log file
→ Wazuh ingestion
→ decoder + rule
→ Wazuh alert

On OPNsense, the syslog destination was updated to point to 192.168.10.20 on UDP 514.

After that, syslog was restarted.

Command:

configctl syslog restart

Status check:

configctl syslog status

Output:

syslog_ng is running

On wazuh01, rsyslog was configured to listen on UDP 514.

Command:

ss -ulnp | grep ':514'

Output:

udp   UNCONN 0 0 0.0.0.0:514
udp   UNCONN 0 0 [::]:514

Incoming Suricata-related lines were filtered into:

/var/log/opnsense-suricata-eve.log

Wazuh then watched that file through localfile ingestion, using a custom decoder and local rule to recognize those forwarded Suricata events and raise alerts.
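The forwarding side can be sketched in two small config fragments. First, an rsyslog drop-in on wazuh01 that listens on UDP 514 and routes Suricata lines into their own file (the filename and match condition here are illustrative; the exact filter depends on how OPNsense tags the forwarded messages):

```
# /etc/rsyslog.d/30-opnsense-suricata.conf (illustrative sketch)
module(load="imudp")
input(type="imudp" port="514")

# Route Suricata EVE messages into a dedicated file, then stop processing
if $msg contains "suricata" then {
    action(type="omfile" file="/var/log/opnsense-suricata-eve.log")
    stop
}
```

And the matching Wazuh side, a standard localfile block in /var/ossec/etc/ossec.conf telling the manager to ingest that file:

```
<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/opnsense-suricata-eve.log</location>
</localfile>
```

The custom decoder and rule then sit on top of this ingestion and turn recognized Suricata lines into alerts.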

This part is also where the AI workflow became very visible. I did not need to manually invent every parsing detail from scratch. ChatGPT helped structure the architecture and sequencing, Codex handled the implementation and verification steps, and I still reviewed whether the chosen method made sense for my environment.

That is the operational version of “using AI carefully for productivity.”


Step 5: End-to-End Detection Confirmed

This was the real finish line for the project.

The same event had to be visible in multiple places.

First, on OPNsense inside Suricata’s own log:

/var/log/suricata/eve.json

Second, on the Wazuh server after forwarding:

/var/log/opnsense-suricata-eve.log

Third, inside Wazuh’s own alert pipeline:

/var/ossec/logs/alerts/alerts.json

And that is exactly what happened.

The same Suricata event for ET DNS Query for .to TLD with signature ID 2027757 was observed on OPNsense, forwarded to wazuh01, and converted into a Wazuh alert by the custom decoder and rule.

So the working chain became:

real network event
→ detected by Suricata
→ forwarded by syslog
→ received by rsyslog
→ ingested by Wazuh
→ visible as a Wazuh alert

That is the point where this stopped being “a few services installed on some VMs” and became a functioning security monitoring pipeline.
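That three-stage visibility can also be spot-checked mechanically: the same signature ID should appear in all three files. A hypothetical helper for that check (the paths are the ones from this setup; the function name is mine, and real logs would deserve proper JSON parsing rather than a substring search):

```python
from pathlib import Path

def chain_has_signature(paths, signature_id):
    """Return True if the signature ID appears in every file of the chain."""
    needle = str(signature_id)
    return all(needle in Path(p).read_text(errors="ignore") for p in paths)

# The three stages of the pipeline, on OPNsense and wazuh01:
CHAIN = [
    "/var/log/suricata/eve.json",          # Suricata detection
    "/var/log/opnsense-suricata-eve.log",  # syslog forwarding
    "/var/ossec/logs/alerts/alerts.json",  # Wazuh alert pipeline
]

# chain_has_signature(CHAIN, 2027757)  # True once the whole pipeline works
```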

This screenshot was taken a day later, and the rule is already showing multiple events.


Validation Evidence: End-to-End Proof

The same Suricata-originated event was confirmed in three places:

Stage                               Location
Suricata detection on OPNsense      /var/log/suricata/eve.json
Syslog forwarding on wazuh01        /var/log/opnsense-suricata-eve.log
Wazuh alert pipeline                /var/ossec/logs/alerts/alerts.json

Sanitized event summary:

signature: ET DNS Query for .to TLD
signature_id: 2027757
src_ip: 192.168.10.x
dst_ip: 192.168.10.1
proto: UDP
dst_port: 53

This kind of validation matters because it proves the whole flow, not just one component.

 


Why This Is More Than Just a Tool Install

Technically, yes, this is a homelab security build.

But for me it is also a project about AI-assisted administration with accountability.

The useful part is not that an AI can produce commands. The useful part is that a person can use AI to move faster while still making informed decisions.

That means:

  • deciding that VLAN10 matters more than chasing future lab theory

  • refusing unnecessary hardening steps when they do not fit the environment

  • separating architecture from execution

  • keeping rollback paths

  • validating each stage before moving on

  • keeping the final responsibility with the human operator

That is the operating model I wanted to practice.

You could summarize the role split like this:

Daniel:
real environment, priorities, validation, screenshots (I am looking to get that delegated soon as well), final responsibility

ChatGPT:
architecture, sequencing, risk thinking, management decisions, explanations

Codex:
fast implementation, verification, technical evidence, documentation groundwork

And because I am working toward the Security+ exam, this kind of project is especially useful. It turns abstract concepts like firewalling, IDS, SIEM, centralized logging, and layered security into something visible and concrete.


Lessons Learned

The biggest lesson is that structure matters more than speed.

It would have been easy to rush into Wazuh, enable too much, break something, and spend hours debugging. Instead, the project worked because it was staged carefully.

First the environment was assessed.
Then Suricata was enabled safely.
Then a real live-rule alert was validated.
Then Wazuh was deployed.
Then the forwarding and parsing path was connected.
Then the entire chain was proven.

The second lesson is that AI is most useful when it is used as a team member with a role, not as a magical authority.

The third lesson is that even a relatively small homelab can model a professional security pattern:

preventive control
→ detective control
→ centralized monitoring
→ evidence

That is a much more interesting story than “I installed another dashboard.”



Where the Rabbit Hole Goes Next

This entry ends at exactly the right point.

The detection pipeline now works. Suricata sees real events. Wazuh ingests them. The firewall, IDS, forwarding path, and SIEM all work together.

Now the fun part begins. First, though: preparing for the Security+ exam in two weeks.

Next, ChatGPT gets to teach me how to actually use Wazuh, not just install it. We will start exploring the dashboard properly, understanding what the alerts mean, improving the parser logic where needed, and automating more of the surrounding workflow. That includes continuing to improve the infrastructure itself, not just admiring that the services are running.

And once that foundation is solid, the next blog entry will move from detection plumbing to controlled suspicious activity and attack-style demos, so that the mini Security Operations Center has something more interesting to investigate than a single DNS test.

For this phase, though, the important result is already there:

the network gained another layer of security, and it happened through a collaboration between human judgment, AI planning, and automated execution.