SASE Proof of Concept (PoC) Guide
Run a 4-6 week SASE PoC with 30-50 pilot users across 2-3 use cases: SWG web filtering, ZTNA app access, and CASB shadow IT discovery. Define pass/fail criteria before starting — latency thresholds, detection rates, and admin workflow time. The biggest PoC trap: vendors cherry-pick the easiest test cases. Insist on testing TLS inspection against your actual bypass list.
A SASE proof of concept (PoC) is a time-boxed, controlled evaluation of one or more SASE/SSE platforms against your specific environment, applications, and use cases. The purpose is to validate that the platform works in your network, with your identity provider, against your applications, and for your users — because vendor demos on pristine lab environments prove nothing about real-world compatibility. A well-structured PoC takes 4 to 6 weeks, tests 8 to 12 specific scenarios, and produces a quantitative scorecard that directly informs your purchasing decision. A poorly structured PoC takes 12+ weeks, tests whatever the vendor wants to show, and produces subjective opinions that do not survive procurement committee scrutiny.
PoC planning: before you touch technology
Define success criteria upfront
Before inviting any vendor into your environment, define exactly what success looks like. Write down 8 to 12 specific, measurable test scenarios that map to your actual deployment requirements. Each scenario should have a pass/fail criterion that is not subject to interpretation. For example: 'ZTNA must provide access to our internal Jira instance (HTTPS) with sub-second connection time when authenticated through Okta with MFA, on a Windows 11 device with CrowdStrike EDR running' is a testable scenario. 'ZTNA works well for internal applications' is not. Document these scenarios in a shared scorecard before any vendor engagement.
Success criteria should cover five categories: functionality (does the feature work as advertised?), performance (what is the latency impact? what is the connection time?), compatibility (does it work with our IdP, our EDR, our OS mix, our applications?), operations (how hard is the setup? how clear is the logging? how useful are the dashboards?), and integration (does the API work? can we automate deployment? does it feed our SIEM?). Weight these categories by importance to your organization — an organization with 50% macOS endpoints should weight compatibility higher than one with 95% Windows.
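If you want the scorecard in a machine-readable form rather than only a spreadsheet, a minimal sketch along these lines can work; the scenario names, weights, and criteria below are illustrative placeholders, not a recommended set.

```python
from dataclasses import dataclass

@dataclass
class TestScenario:
    name: str           # e.g. "ZTNA: Internal web app access"
    category: str       # functionality, performance, compatibility, operations, integration
    weight: float       # percent of total score; all scenario weights should sum to 100
    pass_criteria: str  # objective, measurable condition agreed before vendor engagement

# Illustrative entries only -- replace with the 8 to 12 scenarios from your own scorecard.
scenarios = [
    TestScenario("ZTNA: Internal web app access", "functionality", 10,
                 "Sub-second connection to internal Jira via Okta MFA on Windows 11 with CrowdStrike"),
    TestScenario("Performance: Latency impact", "performance", 10,
                 "Less than 10 ms added latency for SWG inspection"),
    TestScenario("Agent: Deployment and compatibility", "compatibility", 8,
                 "Silent install via Intune; no conflict with existing EDR or VPN client"),
]

total = sum(s.weight for s in scenarios)
if total != 100:
    print(f"Scenario weights sum to {total}%; adjust them to 100% before vendor engagement.")
```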
Select test applications carefully
Choose 5 to 8 applications for the PoC that represent your real deployment challenges. Include at least one of each: a standard internal web application (your intranet or wiki), a business-critical SaaS application (Microsoft 365, Salesforce), an application that uses certificate pinning (banking portal, government site), a thick-client application accessed via VPN today (ERP client, legacy database tool), an application sensitive to latency (video conferencing, real-time collaboration), and an application that handles regulated data (HR system, patient records). These applications will expose compatibility issues, performance impacts, and TLS inspection challenges that a demo environment never reveals.
Assemble the right evaluation team
The PoC evaluation team must include representatives from security (policy configuration and threat testing), networking (traffic steering and performance measurement), endpoint (agent deployment and compatibility), identity (IdP integration and conditional access), and at least two non-technical pilot users who will test real workflows and report on user experience. Without the endpoint and identity team members, you will miss compatibility issues. Without the pilot users, you will miss user experience problems. Assign a PoC lead who owns the timeline, coordinates vendor access, and is responsible for producing the final scorecard.
PoC evaluation scorecard template
| Test Scenario | Weight | Score (1-10) | Pass/Fail Criteria |
|---|---|---|---|
| SWG: TLS inspection with certificate deployment | 10% | — | All test sites load without certificate errors on Windows, macOS, iOS. Bypass list handles pinned apps. |
| SWG: Malware detection rate | 10% | — | Detect 100% of EICAR test files, 90%+ of test samples from MalwareBazaar delivered via HTTPS. |
| ZTNA: Internal web app access | 10% | — | Sub-second connection to 3 internal web apps after Okta MFA. Posture checks pass on test devices. |
| ZTNA: Thick-client app access | 10% | — | RDP and SSH sessions stable for 30+ minutes. Proprietary ERP client functional through ZTNA tunnel. |
| CASB: Shadow IT discovery | 8% | — | Discover and categorize 90%+ of SaaS apps detected in 2 weeks of traffic monitoring. |
| CASB: Inline SaaS controls | 8% | — | Block upload to personal Dropbox while allowing corporate Dropbox. Enforce tenant restrictions on M365. |
| DLP: Sensitive data detection | 8% | — | Detect SSN, credit card, and custom regex patterns in file uploads, email bodies, and SaaS form fields. |
| Performance: Latency impact | 10% | — | Less than 10ms added latency for SWG inspection. Less than 5ms for ZTNA connection after initial auth. |
| Agent: Deployment and compatibility | 8% | — | Agent installs silently via Intune. No conflicts with CrowdStrike, Zscaler client (if testing), or VPN client. |
| Agent: Stability over 2 weeks | 8% | — | Zero agent crashes. Zero routing table corruption. Zero DNS resolution failures on pilot devices. |
| Operations: Logging and dashboards | 5% | — | All blocked events visible in dashboard within 60 seconds. Log export to Splunk functional via syslog or API. |
| Integration: API and automation | 5% | — | Terraform provider or REST API can create a ZTNA policy and deploy an application connector programmatically. |
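The last scorecard row is the easiest one to script. A hedged sketch of that integration test follows; the endpoint paths, payload fields, and token handling are entirely hypothetical, since every SSE vendor exposes a different management API, and the pass criterion is only that an equivalent, documented call exists and works non-interactively.

```python
import json
import urllib.request

# Hypothetical API base, paths, and payload -- substitute the vendor's documented API.
API_BASE = "https://api.vendor.example/v1"
TOKEN = "REPLACE_WITH_POC_API_TOKEN"

def api(method: str, path: str, body: dict | None = None) -> dict:
    """Send one authenticated JSON request to the (hypothetical) management API."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.loads(resp.read() or b"{}")

# Create a ZTNA policy granting the pilot group access to the test Jira instance,
# then confirm it is retrievable by ID immediately afterwards.
created = api("POST", "/ztna/policies", {
    "name": "poc-jira-access",
    "app": "jira.internal.corp",
    "group": "poc-pilot-users",
    "action": "allow",
})
fetched = api("GET", f"/ztna/policies/{created['id']}")
print("Integration scenario passed:", fetched.get("name") == "poc-jira-access")
```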
Executing the PoC
Week 1: Infrastructure setup
Deploy the vendor's cloud tenant or management console. Configure identity provider integration (SAML/OIDC with your production IdP — do not use a test IdP because you need to validate against your actual user directory and MFA configuration). Deploy ZTNA connectors in the network segments hosting your test applications. Deploy the endpoint agent on 10 to 20 pilot devices representing your OS mix (Windows, macOS, and mobile if applicable). Deploy root CA certificates for TLS inspection. Verify basic connectivity: pilot users can access the internet through the SWG and internal applications through ZTNA.
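A quick way to confirm pilot traffic is actually flowing through the platform is to check the certificate issuer on external sites (an inspected site should present a certificate chained to the root CA you deployed) and to confirm internal apps connect through ZTNA. A rough sketch, assuming Python on a pilot device that trusts the deployed inspection CA; the hostnames and issuer marker are placeholders.

```python
import socket
import ssl

# Placeholders -- substitute your own test sites, internal apps, and the issuer name
# of the root CA deployed for TLS inspection.
EXTERNAL_SITES = ["example.com", "office.com"]
INTERNAL_APPS = ["jira.internal.corp", "wiki.internal.corp"]
INSPECTION_CA_MARKER = "YourSASEVendor"

def cert_issuer(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Return the issuer common name of the certificate presented for host."""
    # Assumes the Python environment trusts the deployed inspection root CA;
    # otherwise the handshake fails with a verification error.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
            return issuer.get("commonName", "")

for host in EXTERNAL_SITES:
    try:
        issuer = cert_issuer(host)
        print(f"{host}: issuer={issuer!r} inspected={INSPECTION_CA_MARKER in issuer}")
    except OSError as exc:
        print(f"{host}: FAILED ({exc})")

for host in INTERNAL_APPS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable through ZTNA")
    except OSError as exc:
        print(f"{host}: NOT reachable ({exc})")
```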
Week 2: Functional testing
Execute each test scenario from your scorecard systematically. Document results with screenshots, latency measurements, and pass/fail determinations. When a test fails, investigate whether it is a configuration issue (fixable) or an architectural limitation (not fixable). Give the vendor SE reasonable time to troubleshoot — 24 to 48 hours per issue — but do not extend the PoC timeline indefinitely. If a critical scenario fails and the vendor cannot resolve it within the PoC window, it is a legitimate finding that belongs on the scorecard.
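Latency numbers carry more weight on the scorecard when they are collected the same way every time. One simple approach is to time the same URLs with the agent enabled and disabled and compare the medians; the URLs and sample count below are placeholders.

```python
import statistics
import time
import urllib.request

# Run once with the agent disabled (baseline) and once with it enabled, against the
# same URLs, then compare medians to estimate added latency.
TEST_URLS = ["https://example.com", "https://www.wikipedia.org"]
SAMPLES = 20

def timed_get(url: str, timeout: float = 10.0) -> float:
    """Time a fresh HTTPS request (DNS + TCP + TLS + first KB), in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1024)
    return (time.perf_counter() - start) * 1000

for url in TEST_URLS:
    samples = [timed_get(url) for _ in range(SAMPLES)]
    median = statistics.median(samples)
    p95 = statistics.quantiles(samples, n=20)[-1]   # approximate 95th percentile
    print(f"{url}: median={median:.1f} ms  p95={p95:.1f} ms  ({SAMPLES} samples)")
```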
Weeks 3-4: Real-world usage
Expand to 30 to 50 pilot users who use the platform for their actual daily work. This phase surfaces issues that controlled testing misses: applications that were not in your test list but break under TLS inspection, edge cases in IdP integration (expired certificates, conditional access conflicts), agent stability over days of continuous use, and user experience feedback from people who are not network engineers. Collect structured feedback from pilot users at the end of each week using a standardized survey: rate your access experience (1-5), rate your browsing speed (1-5), did anything break this week (yes/no with details).
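Tallying the weekly surveys is easy to script if responses land in a CSV. A small sketch, assuming a hypothetical pilot_survey.csv with the columns named in the comments.

```python
import csv
from collections import defaultdict

# Assumed layout of a hypothetical pilot_survey.csv, one row per response:
#   week,user,access_rating,speed_rating,broke,details
weeks = defaultdict(lambda: {"access": [], "speed": [], "broke": 0, "responses": 0})

with open("pilot_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        w = weeks[row["week"]]
        w["access"].append(int(row["access_rating"]))
        w["speed"].append(int(row["speed_rating"]))
        w["broke"] += row["broke"].strip().lower() == "yes"
        w["responses"] += 1

for week, w in sorted(weeks.items()):
    avg_access = sum(w["access"]) / len(w["access"])
    avg_speed = sum(w["speed"]) / len(w["speed"])
    print(f"Week {week}: access {avg_access:.1f}/5, speed {avg_speed:.1f}/5, "
          f"breakage reports {w['broke']}/{w['responses']}")
```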
Weeks 5-6: Analysis and scoring
Complete the scorecard with weighted scores for each test scenario. Calculate the total weighted score. If you ran parallel PoCs with multiple vendors, compare scorecards side by side. Present findings to the procurement committee with specific, evidence-based recommendations. The scorecard should make the decision obvious — if it does not, your test scenarios were not specific enough or your weights were not differentiated enough.
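The weighted total and the side-by-side comparison are a few lines of arithmetic. A sketch using the weights from the scorecard table and placeholder 1-10 scores for two vendors; flagging scenarios where the vendors diverge sharply tends to focus the committee discussion.

```python
# Weights (percent) come straight from the scorecard table and must sum to 100.
# The 1-10 scores are placeholders -- fill them in from your own test results.
weights = {
    "SWG: TLS inspection": 10, "SWG: Malware detection": 10,
    "ZTNA: Internal web apps": 10, "ZTNA: Thick-client apps": 10,
    "CASB: Shadow IT discovery": 8, "CASB: Inline SaaS controls": 8,
    "DLP: Sensitive data detection": 8, "Performance: Latency impact": 10,
    "Agent: Deployment/compatibility": 8, "Agent: 2-week stability": 8,
    "Operations: Logging/dashboards": 5, "Integration: API/automation": 5,
}
assert sum(weights.values()) == 100

scores = {
    "Vendor A": dict(zip(weights, [8, 9, 9, 6, 7, 7, 5, 8, 9, 8, 7, 6])),
    "Vendor B": dict(zip(weights, [7, 8, 8, 8, 8, 6, 6, 9, 7, 6, 8, 7])),
}

# Weighted total on a 0-10 scale: sum(weight% x score) / 100.
for vendor, s in scores.items():
    total = sum(weights[k] * s[k] for k in weights) / 100
    print(f"{vendor}: {total:.2f} / 10")

# Scenarios where vendors diverge by 2+ points deserve explicit discussion in the findings.
for k in weights:
    vals = [scores[v][k] for v in scores]
    if max(vals) - min(vals) >= 2:
        print(f"  Large gap on '{k}': " + ", ".join(f"{v}={scores[v][k]}" for v in scores))
```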
Common PoC traps to avoid
Trap 1: Letting the vendor drive the test plan. Vendors cherry-pick the easiest test cases and steer the evaluation toward their strengths. Hold to the scorecard scenarios you defined before the engagement, and insist on testing TLS inspection against your actual bypass list and your actual applications.
Trap 2: Running the PoC too long. A PoC that stretches beyond 6 weeks loses momentum, pilot users disengage, and the vendor rotates SEs. Time-box aggressively. If the platform cannot be set up and tested in 6 weeks, that itself is a data point about deployment complexity.
Trap 3: Not testing failure scenarios. What happens when the ZTNA broker goes down? How does the agent behave when it loses connectivity to the cloud? What is the failover time? Test resilience explicitly — vendors will not volunteer to demonstrate failure modes, but these scenarios determine your production reliability.
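One way to put a number on failover is to poll a ZTNA-published app at a fixed interval while you deliberately take a connector or broker node down, and record how long connections fail. A rough sketch; the hostname is a placeholder and the loop runs until interrupted.

```python
import socket
import time

# Hypothetical internal app published only through ZTNA. Run this on a pilot device,
# take a connector or broker node down mid-run, and the printed gap approximates the
# user-visible failover time. Stop with Ctrl-C.
APP = ("jira.internal.corp", 443)
INTERVAL = 1.0

outage_started = None
while True:
    try:
        with socket.create_connection(APP, timeout=2):
            if outage_started is not None:
                print(f"Recovered after {time.time() - outage_started:.1f} s of failed connections")
                outage_started = None
    except OSError:
        if outage_started is None:
            outage_started = time.time()
            print("Connections failing...")
    time.sleep(INTERVAL)
```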
Trap 4: Evaluating based on features you will not deploy in year one. If you are deploying SWG and ZTNA first, weight those heavily. Do not give equal weight to CASB, DLP, and SD-WAN features that you will not touch for 12 to 18 months. Platforms evolve — the vendor with the best CASB today may not have the best CASB when you deploy it next year. Score what matters now.
Trap 5: Not including pricing in the PoC evaluation. Request detailed pricing — per-user, per-site, per-feature — during the PoC, not after. A platform that scores 9/10 on functionality but costs 2x your budget is not the winner. Include a cost-per-point analysis in your scorecard to normalize capability against price.
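Cost-per-point is simple arithmetic: divide each vendor's total cost over the commitment period by its weighted PoC score. The figures below are placeholders.

```python
# Placeholder figures: three-year total cost (licenses, connectors, deployment effort)
# and the weighted PoC score for each vendor. Lower cost per point is better.
vendors = {
    "Vendor A": {"score": 7.53, "cost_3yr": 690_000},
    "Vendor B": {"score": 7.39, "cost_3yr": 510_000},
}
for name, v in vendors.items():
    print(f"{name}: {v['score']:.2f}/10, ${v['cost_3yr']:,} over 3 years, "
          f"${v['cost_3yr'] / v['score']:,.0f} per point")
```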
Translating PoC results into a decision
The PoC scorecard is one input to the vendor decision, not the only input. Complement it with reference checks (call 3 to 5 customers in similar industries at similar scale), Gartner Peer Insights reviews (filter by your industry and company size), a financial viability assessment (is the vendor profitable? is it an acquisition target?), and roadmap credibility (does the vendor have a history of shipping promised features on time?). The PoC validates technical fit. These additional factors validate strategic fit. Both matter for a 3 to 5 year platform commitment.
Sources & further reading
- Gartner, "How to Run a Successful Technology Proof of Concept" — gartner.com/smarterwithgartner/proof-of-concept-best-practices
- Gartner, "Magic Quadrant for Security Service Edge" — gartner.com/reviews/market/security-service-edge
- Cisco, "SASE Proof of Value Guide" — cisco.com/c/en/us/solutions/enterprise-networks/sase
- Palo Alto Networks, "Prisma SASE Evaluation Guide" — paloaltonetworks.com/sase/evaluate
- Dark Reading, "How to Evaluate SASE Vendors" — darkreading.com/cloud-security