Every enterprise Wi-Fi project starts the same way: a multi-floor campus, thousands of employees, a mix of corporate laptops, personal phones, conference room hardware, and a growing fleet of IoT sensors. The existing wireless infrastructure is aging, controller-based, and showing its limits. The mandate is straightforward. Deploy Meraki cloud-managed access points, support the latest security standards, and keep it simple enough that the ops team won’t need a war room every time something goes wrong.
What follows is months of design work, lab testing, and a deployment that teaches you more about enterprise Wi-Fi than any certification ever will. This is that story.
Starting with Requirements, Not Hardware
The instinct on any wireless project is to start picking access points. Resist it. Hardware is the last decision, not the first. Start with a requirements document that forces every stakeholder to say what they actually need.
The corporate SSID had to support WPA3-Enterprise with 802.1X authentication, broadcast only on 5 GHz and 6 GHz bands, and enforce mandatory DHCP. The guest SSID needed captive portal authentication, open or OWE encryption, and tight pre-authentication controls. Before authenticating, users should only be able to reach DNS, DHCP, and the login portal. Nothing else.
Both SSIDs had to switch traffic locally at the AP. No tunneling back to a centralized controller. This was a deliberate architectural choice: in a cloud-managed model, the access point is both the radio and the switching point. Client traffic lands on the local VLAN at the AP’s switchport, which means the wired infrastructure under the AP matters just as much as the radio configuration above it.
You’ll also need 6 GHz coverage for Wi-Fi 6E clients, BLE beacons for location services, Adaptive Policy (SGT-based segmentation) from AP to switch, and integration with your existing ISE deployment for authentication and authorization. Every one of these requirements shapes the design in non-obvious ways.
The Data Plane: Why Local Switching Changes Everything
Traditional enterprise Wi-Fi funnels all wireless client traffic through a central controller via CAPWAP tunnels. The controller terminates the tunnel, applies policies, and switches the traffic onto the wired network. It’s clean, centralized, and creates a chokepoint.
Meraki cloud-managed APs flip this model. There is no on-premises controller. The Meraki dashboard handles configuration, monitoring, and firmware. That is the control plane. But the data plane is entirely local. When a laptop on the corporate SSID sends a packet, the AP tags it into VLAN 100 (Data) and puts it directly onto the switchport. When a guest sends a packet, it goes into VLAN 200 (Guest). The traffic never touches a controller because there isn’t one on-site.
This has a cascading effect on the wired design. Every AP switchport must be a trunk carrying the services VLAN, the corporate data VLAN, and the guest VLAN:
- VLAN 300 is Services and AP management (native VLAN on the trunk)
- VLAN 100 is Data for corporate wireless
- VLAN 200 is Guest wireless
The switch port configuration is no longer the simple access port you’d use for a CAPWAP AP. It’s a trunk that must also handle 802.1X or MAB authentication for the AP itself. That tension between needing an access port for authentication and a trunk port for data is what drives the use of ISE-based interface templates. The AP authenticates on an access port, ISE sends back an authorization profile that applies a template, and the template flips the port to trunk mode with the right allowed VLANs. It’s elegant, but it took three rounds of lab testing to get the timing right.
template meraki_ap
 ! Applied by ISE after the AP passes MAB; converts the port to a trunk
 switchport mode trunk
 ! AP management (VLAN 300) rides untagged as the native VLAN
 switchport trunk native vlan 300
 switchport trunk allowed vlan 100,200,300
 ! Authenticate the AP's MAC, then bridge client frames behind it
 access-session host-mode multi-host
The multi-host setting is critical. Without it, the switch authenticates only the first MAC on the port, which is the AP itself, and blocks everything else. Since the AP is bridging client traffic with different source MACs, multi-host ensures those frames aren’t dropped.
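Two show commands make the template handoff easy to verify in the lab; the interface name below is illustrative. The first confirms the AP's MAB session and the template ISE handed back, and the second shows the effective port configuration after the template is applied.
show access-session interface GigabitEthernet1/0/1 details
show derived-config interface GigabitEthernet1/0/1
If the derived config still shows an access port after the session authorizes, the problem is almost always the template name mismatch between the ISE authorization profile and the switch.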
Authentication: When APs Become the NAD
In a controller-based architecture, the wireless LAN controller is the Network Access Device (NAD). It talks to ISE, processes RADIUS exchanges, and handles CoA (Change of Authorization) messages. Clients roam between APs, but the controller is the stable RADIUS endpoint.
In Meraki cloud-managed mode, each AP is its own NAD. When a client connects to the corporate SSID and does 802.1X, the AP talks directly to ISE. When ISE wants to send a CoA to bounce a session after a policy change, it sends that CoA to the specific AP hosting that client.
This works, but it creates a conflict with fast roaming. IEEE 802.11r allows a client to pre-compute encryption keys with neighboring APs, enabling sub-50ms roams. But 802.11r shortcuts the full RADIUS exchange. If CoA needs to reach the client’s current AP, and 802.11r means the AP might not have a full authentication session, the CoA has nowhere to land. In Meraki deployments, CoA and 802.11r are mutually exclusive. Choose CoA.
The tradeoff is slightly slower roaming. To mitigate this, enable ISE session resume (also known as fast reconnect). With session resume, a roaming client presents a session ticket to the new AP, which validates it with ISE without going through the full EAP exchange. It’s not as fast as 802.11r, but it brings roam times down from seconds to hundreds of milliseconds, which is acceptable for voice and video.
You should still enable 802.11k and 802.11v. 802.11k provides the client with a neighbor list so it knows which APs to consider for roaming. 802.11v allows the AP to suggest a better AP to the client via BSS Transition Management frames. Together, they make roams smarter without conflicting with CoA.
The ISE setup requires adding every AP as a network device. Since APs typically get addresses via DHCP, add the entire AP management subnet (VLAN 300) as a NAD entry rather than individual IPs. The RADIUS shared secret and the authorization profile, which maps to the meraki_ap interface template, complete the loop.
WPA3: The 6 GHz Mandate
WPA3 isn’t optional. It’s required. The moment you decide to broadcast an SSID on 6 GHz, WPA3 becomes mandatory. The Wi-Fi Alliance designed it that way. 6 GHz is a clean-slate spectrum, and they used the opportunity to enforce modern security from day one.
For the corporate SSID, deploy WPA3-Enterprise Only with Protected Management Frames (PMF) set to mandatory. PMF protects management frames like deauthentication, disassociation, and action frames from being spoofed, which mitigates a class of attacks that WPA2 left wide open. With PMF mandatory, every client connecting to the corporate SSID must support it. In 2026, this is a reasonable requirement; run compatibility testing against your managed fleet and move any holdouts (usually legacy printers) to a separate wired VLAN.
For the guest SSID, use a split approach. On 2.4 GHz and 5 GHz, the SSID operates with open authentication and Centralized Web Authentication (CWA) through ISE. Guests connect, get redirected to a captive portal, authenticate, and then ISE sends a CoA to lift the walled garden. On 6 GHz, open networks aren’t permitted, so you deploy OWE (Opportunistic Wireless Encryption) instead, which provides encryption without requiring a password. The 6 GHz-capable guest devices get encrypted airtime automatically; legacy guests on lower bands get the traditional open+CWA flow.
One thing to watch: WPA3-Enterprise uses SHA-256 for the Authentication and Key Management (AKM) suite, which means your RADIUS server must support the corresponding TLS ciphers. Validate ISE’s cipher configuration against the documented requirements before going live. A mismatch here means the EAP handshake silently fails and clients get bounced.
RF Design: Mirroring What Works
Don’t try to reinvent the RF wheel. If you have an existing Catalyst-managed infrastructure with a well-tuned AI-assisted RF profile, mirror those settings as closely as possible for the Meraki deployment. The goal is consistency: an engineer walking between an old-infrastructure floor and a new-infrastructure floor shouldn’t notice a difference in coverage or performance.
Here’s a reference RF profile:
| Band | Channel Width | Min Tx Power | Max Tx Power | RX-SOP | Min Bitrate | Channels |
|---|---|---|---|---|---|---|
| 2.4 GHz | 20 MHz | 2 dBm | 12 dBm | -80 dBm | 24 Mbps | 1, 6, 11 |
| 5 GHz | 40 MHz | 6 dBm | 15 dBm | -80 dBm | 24 Mbps | 36 – 165 |
| 6 GHz | 40 MHz | 6 dBm | 30 dBm | -80 dBm | 24 Mbps | All country channels |
A few things to call out:
RX-SOP (Receive Start of Packet) at -80 dBm. This tells the AP to ignore any frame received weaker than -80 dBm. It’s a noise floor tuning knob. In a dense office with dozens of APs, you don’t want an AP trying to decode a weak frame from three floors away. Setting this consistently across all bands keeps cell sizes predictable.
Minimum bitrate at 24 Mbps. This disables the legacy low-rate frames (1, 2, 5.5, 11 Mbps on 2.4 GHz and 6, 9, 12, 18 Mbps on 5 GHz). Low bitrate frames take longer to transmit and consume more airtime, penalizing everyone: a 1,500-byte frame occupies the air for roughly 12 ms at 1 Mbps but only about 0.5 ms at 24 Mbps. By forcing a 24 Mbps floor, you keep airtime utilization reasonable and push low-signal clients to roam instead of clinging to a distant AP.
40 MHz on 5 and 6 GHz. Many existing deployments use 20 MHz on 5 GHz. Pushing to 40 MHz in both bands roughly doubles per-channel throughput. On 6 GHz, with its wide spectrum allocation, 40 MHz is conservative and you could go to 80 or even 160 MHz. Start at 40 MHz for consistency and widen later if density demands it.
No PSC (Preferred Scanning Channel) enforcement on 6 GHz. The Meraki dashboard doesn't currently support restricting APs to PSC channels, which means they will use any available 6 GHz channel, not just the PSC subset that clients scan first. In practice, clients discover non-PSC APs through Reduced Neighbor Report (RNR) elements in the 5 GHz beacons, so the impact is minimal, but it's a divergence worth noting.
Enable AI-driven channel planning with a “minimize changes during busy hours” policy. The system observes the RF environment, detects co-channel interference, and moves APs to better channels, but only outside peak usage hours. This prevents the mid-meeting channel change that drops a video call.
QoS: The Ceiling That Wasn’t
QoS in enterprise Wi-Fi is about two things: marking traffic correctly and preventing abuse. On the wired side, you define per-queue scheduling, policing, and shaping. On the wireless side, the AP controls EDCA (Enhanced Distributed Channel Access) parameters and can mark or remark DSCP values.
For the guest SSID, implement a straightforward policy: remark all traffic to DSCP 0 (Best Effort). Guests don’t get priority. This is enforced through a traffic shaping rule that matches the entire guest subnet (VLAN 200) and forces DSCP 0. Simple, effective.
For the corporate SSID, ideally you’d want a QoS ceiling to cap all traffic at a certain DSCP value so that a misconfigured client can’t send packets marked as EF (Expedited Forwarding) and jump the queue. This is where you hit a Meraki limitation. The APs don’t support a true per-SSID QoS ceiling. A client can send traffic with any DSCP marking, and the AP will pass it through to the wired network unchanged.
The workaround is incomplete but practical: trust the DSCP from the AP on the switch port (trust dscp), create a video/voice-specific traffic shaping rule to remark collaboration voice traffic to EF (DSCP 46), and accept that a misbehaving client could technically abuse DSCP markings. In a corporate environment with managed endpoints, this is a calculated risk rather than a gap.
The bigger picture: align the AP's EDCA profile with the 802.11-2016 recommendations, which give voice traffic priority access to the wireless medium. Even if the DSCP marking is off, the EDCA parameters ensure voice frames beat bulk data in the over-the-air contention. The wired QoS policy then takes over once the traffic hits the switch.
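To make that handoff concrete, here's a minimal sketch of the wired side, assuming a Catalyst platform with MQC egress queueing; the class, policy, and interface names are illustrative. EF-marked voice gets the strict-priority queue, and everything else shares the remaining bandwidth.
! Illustrative egress policy for the AP uplink
class-map match-any VOICE
 match dscp ef
policy-map AP-UPLINK-OUT
 class VOICE
  ! Strict-priority queue for voice
  priority level 1
 class class-default
  bandwidth remaining percent 100
interface GigabitEthernet1/0/1
 service-policy output AP-UPLINK-OUT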
Adaptive Policy: SGTs from Air to Wire
Scalable Group Tags (SGTs) allow you to enforce access policy based on identity rather than IP address. Every client gets an SGT during authentication. ISE assigns it based on the user’s role, device posture, and group membership. The AP then tags every frame from that client with the SGT before sending it out the switchport.
For this to work, the switch port must trust the SGT from the AP. Configure CTS (Cisco TrustSec) on the edge switch to treat SGT 2 (the infrastructure SGT assigned to AP management traffic) as trusted, and enable role-based enforcement across all VLANs:
! Enable SGACL enforcement globally and on every VLAN
cts role-based enforcement
cts role-based enforcement vlan-list 1-4094
Configure each SSID with Adaptive Policy Group 0:Unknown as the default. The actual SGT assignment happens dynamically via RADIUS: ISE returns the SGT in the authorization response, and the AP applies it. This keeps the wireless config simple and pushes all the policy complexity into ISE, where it belongs.
One gotcha to watch for: the APs tag their own cloud-bound management traffic with SGT 2 by default. Make sure the infrastructure SGT is set to 2 in the Meraki dashboard to match, and that the switch trusts SGT 2 on the AP trunk port. A mismatch here silently drops AP management traffic, and the AP goes offline.
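On the switch side, that trust is configured per interface with CTS manual mode. A minimal sketch, with an illustrative interface name: untagged frames from the AP are assigned SGT 2, and frames that arrive already tagged keep their SGT.
interface GigabitEthernet1/0/1
 cts manual
  ! Untagged ingress gets SGT 2; inline SGTs from the AP are trusted
  policy static sgt 2 trusted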
Rogue Detection and Air Marshal
In any campus with hundreds of APs, rogue detection isn’t optional. Configure alert rules for “honeypot” attacks, meaning APs that broadcast your SSID names but aren’t part of your infrastructure. These are the dangerous ones: an attacker sets up an evil twin SSID, lures clients to connect, and intercepts traffic.
The Meraki dashboard’s Air Marshal feature continuously scans for rogues. Add SSID-specific alerting rules: if any non-infrastructure AP broadcasts your corporate or guest SSID name, generate an immediate alert. The alert goes to the ops team, SNMP traps go to the monitoring system, and syslog events go to Splunk for correlation.
Also configure general wireless health alerts: if more than 30% of APs on a floor go offline, trigger a critical incident. If clients report poor signal strength on any SSID for more than 15 minutes, alert. If a client fails to connect and smart thresholds detect an anomaly, alert. The goal is to detect problems before users start calling the help desk.
The Edge Switch: More Important Than You Think
The edge switch configuration is often an afterthought in wireless design. In a local-switching deployment, it’s foundational. Every design decision at the wireless layer has a corresponding switch requirement:
LLDP. Meraki APs display their MAC address in CDP neighbor details, but they send their configured AP name via LLDP. Enabling lldp run on the edge switch gives the ops team meaningful neighbor information. You see a human-readable AP name instead of a raw MAC address when troubleshooting.
Interface templates. The AP authenticates with MAB on an access port. ISE returns an authorization profile that references the meraki_ap template. The switch applies the template, converting the port to a trunk with the correct native VLAN and allowed VLAN list. This happens automatically on every AP boot.
Sticky timer. There’s a race condition: when the AP port bounces (due to a reset, firmware upgrade, or flap), the switch restarts the authentication process. Without a sticky timer, the port could get stuck in an authentication loop. The AP tries to come up, the port bounces during template application, the bounce triggers re-authentication, and the cycle repeats. Setting access-session interface-template sticky timer 180 gives the template 180 seconds to survive a port bounce without restarting authentication.
DSCP trust. The switch must trust the DSCP markings from the AP, or the entire QoS design is irrelevant. One trust dscp command on the interface ensures that voice packets marked EF by the AP stay EF through the wired network.
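Pulling those four requirements together, here's a minimal sketch of the AP-facing port, assuming IBNS 2.0-style access sessions with AAA and RADIUS toward ISE already configured; the interface name and the subscriber policy name are placeholders.
! Global: human-readable AP names in neighbor output
lldp run
interface GigabitEthernet1/0/1
 description Meraki AP
 ! Access port on the management VLAN until ISE applies the meraki_ap template
 switchport mode access
 switchport access vlan 300
 ! Enable authentication; MAB identifies the AP by its MAC
 access-session port-control auto
 mab
 dot1x pae authenticator
 ! Keep the ISE-applied template through a port bounce for 180 seconds
 access-session interface-template sticky timer 180
 ! Placeholder name for your IBNS 2.0 subscriber control policy
 service-policy type control subscriber MERAKI_AP_POLICY
On recent Catalyst software, DSCP is trusted by default; older MLS QoS platforms need an explicit mls qos trust dscp on the interface.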
Monitoring and Observability
A wireless network you can’t monitor is a wireless network you can’t troubleshoot. Build the observability stack in parallel with the RF design:
SNMPv3. Every network should be configured with SNMPv3 users for monitoring platform discovery and for polling. Store the credentials in a secrets vault, not in spreadsheets. SNMP traps go to the monitoring platform for AP-down events, rogue detection, and wireless health alerts.
Syslog. Distribute syslog feeds regionally: APs in each region send to the nearest Splunk and monitoring endpoints. This data feeds dashboards for client connectivity trends, authentication failures, and RF anomalies.
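The same pipelines should cover the edge switches under the APs, or the wired half of the deployment becomes a blind spot. A minimal IOS sketch, with illustrative destination addresses and credentials shown inline only for readability (in practice they come from the vault):
! SNMPv3 user and trap destination for the monitoring platform
snmp-server group NMS-GROUP v3 priv
snmp-server user nms-poller NMS-GROUP v3 auth sha AuthPassHere priv aes 128 PrivPassHere
snmp-server enable traps
snmp-server host 10.10.10.50 version 3 priv nms-poller
! Regional syslog destination
logging host 10.10.20.50
logging trap informational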
Webhooks. The Meraki dashboard supports webhooks for real-time event streaming. Configure webhooks to push events to your observability pipeline. Every AP state change, every rogue detection event, every client authentication event flows into your SIEM where it can be correlated with wired network events, identity system logs, and application metrics.
Floor-level composites. Group APs by floor into composite monitoring objects. Each floor’s APs are a cluster, and all floor clusters roll up into a site-level composite. This hierarchy lets the ops team see “5th floor has degraded coverage” at a glance instead of sifting through hundreds of individual AP alerts.
Firmware and Lifecycle
Meraki APs receive firmware from the cloud platform. Configure the upgrade strategy to “minimize client downtime.” The platform stages firmware to APs, then reboots them in a rolling fashion, ensuring that neighboring APs stay up while each AP restarts. On a multi-floor building, a firmware rollout takes hours but is invisible to users.
Meraki also simplifies device ordering with a single-SKU model. APs ship with one SKU regardless of regulatory domain; the cloud platform detects each AP's geographic location via Geo-IP and applies the correct regulatory rules automatically, FCC in the US, ETSI in Europe. One SKU, one order process, any country.
What We’d Do Differently
Every deployment teaches you something. Here’s what we’d change:
Test the interface template flow early. The MAB-to-trunk conversion via ISE interface templates is powerful but fragile: the sticky timer, the multi-host requirement, and the naming consistency between the ISE authorization profile and the switch template are all details that only surface under real conditions. Many teams have burned weeks debugging an authentication loop that came down to a missing sticky timer.
Push for PSC channel enforcement. Not having PSC channel support on 6 GHz is a minor issue today but will matter as more 6 GHz clients appear. PSC channels are the ones that 6 GHz-only clients scan first during passive scanning. Without PSC enforcement, discovery relies on RNR elements in 5 GHz beacons, which means 6 GHz-only devices (which will exist soon) have a harder time finding the network.
Start with the monitoring stack. Don't build monitoring after deployment. The first weeks of production shouldn't be spent partially blind, with APs online and working but no syslog, SNMP, or webhook pipelines in place. Build the observability layer first, then deploy APs into an already-monitored environment.
Document the wired dependencies upfront. Wireless engineers design the wireless. Switch engineers configure the switches. The gap between them, things like trunk VLANs, interface templates, and CTS configuration, often lives in emails and meetings. The wireless design document should have a dedicated section that the switch team can take directly to their change window.
The Bigger Picture
Enterprise Wi-Fi design isn’t about picking the best access point. It’s about understanding how the wireless layer integrates with the wired infrastructure, the authentication system, the policy engine, the monitoring stack, and the operational workflows that keep it all running.
Meraki cloud-managed APs simplify the control plane. No on-premises controllers to patch, no HA pairs to manage, no CAPWAP tunnels to troubleshoot. But they shift complexity to the edge. The switch port config is more involved. The ISE integration is more distributed. The RF design must account for local switching behavior. The monitoring must cover hundreds of independent NADs instead of two controllers.
When it all comes together (two SSIDs, WPA3 on corporate, OWE on guest, 6 GHz coverage on every floor, and a monitoring stack that alerts you before users notice problems), the result is a wireless network that operates like infrastructure should: invisibly. The design document that started it all should be version-controlled in Git, because enterprise Wi-Fi is never finished. It's just the current version.