EMMANUEL CHALOKWU
Security Engineering

Security Tool
Engineering Playbook

A comprehensive engineering standard defining how cloud security tools are configured, optimized, and maintained to produce reliable, high-fidelity security signal at scale.

Domain Cloud Security Posture
Scope AWS · Azure · Hybrid
Tools 3 Platforms Documented
Type Living Engineering Standard

Engineering Intent Over Default Behavior

This playbook documents an operating model where security tooling is treated as an engineered system rather than a collection of vendor defaults. Every configuration choice, scope boundary, and tuning decision is intentional and documented.

The core thesis: a smaller set of reliable, high-confidence findings is dramatically more valuable than a high volume of unactionable alerts. Noise is not an inconvenience; it is an engineering failure.

The work draws a clear separation between Security Engineering (which designs, configures, and maintains the tools) and monitoring teams (which consume the output), ensuring each group can operate with confidence in its own domain.

☁️
AWS Config + Security Hub
Cloud configuration & posture visibility
🔍
Rapid7 InsightVM
Vulnerability assessment & prioritization
🛡️
Microsoft Defender for Cloud
Azure posture & workload protection

Five Core Principles

These principles guide every configuration decision, exclusion choice, and tuning effort documented in the playbook.

01
Signal Over Noise

High-confidence, high-impact findings are prioritized. Duplicate, informational, or low-value alerts are engineered out at the source, not filtered downstream.

02
Consistency Over Coverage

Complete coverage ≠ effective coverage. Consistent severity mapping, naming standards, and repeatable outputs matter more than maximizing rule counts.

03
Clear Ownership Boundaries

Engineering owns tool design, scope, and quality. Monitoring owns review and escalation. This separation prevents misconfiguration from becoming a monitoring burden.

04
Tool Health = Security Control

A misconfigured or silently failing tool creates false assurance. Validating scan success, coverage, and ingestion health is a first-class engineering responsibility.

05
Continuous Optimization

Tooling evolves with the environment. Changes in cloud services, threat patterns, and vendor behavior require ongoing engineering review, not a one-time setup.

"Default configurations are evaluated, not blindly accepted."

Every tool in scope is deliberately configured — inclusion, exclusion, and severity are all engineering decisions.

Per-Platform Standards

☁️

AWS Config + Security Hub

Cloud configuration monitoring · CSPM · Finding aggregation

Baseline Enablement
  • Enabled only in regions hosting governed workloads
  • Scoped to resource types relevant to the environment
  • Security Hub as the single aggregation layer (CSPM)
  • AWS Foundational Security Best Practices enabled
  • Selected CIS AWS Foundations controls (high signal only)
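The region-scoped enablement above can be sketched with boto3. The region list is a placeholder and the standard ARN is the published format for AWS Foundational Security Best Practices; verify the ARNs for your partition and chosen standards before use.

```python
# Sketch: enabling Security Hub standards only in governed regions.
# GOVERNED_REGIONS and the standards list are illustrative, not prescriptive.
GOVERNED_REGIONS = ["us-east-1", "eu-west-1"]
STANDARDS = [
    "arn:aws:securityhub:{region}::standards/aws-foundational-security-best-practices/v/1.0.0",
]

def standards_requests(region):
    """Build the subscription requests for one governed region."""
    return [{"StandardsArn": arn.format(region=region)} for arn in STANDARDS]

def enable_selected_standards():
    import boto3  # deferred so the module loads without the AWS SDK installed
    for region in GOVERNED_REGIONS:
        client = boto3.client("securityhub", region_name=region)
        client.batch_enable_standards(
            StandardsSubscriptionRequests=standards_requests(region)
        )
```

Enabling per region, rather than organization-wide, keeps the "only where governed workloads exist" boundary explicit in code.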
Intentional Exclusions
  • High-volume informational findings with no action path
  • Controls overlapping with higher-fidelity detection elsewhere
  • Rules enforcing practices mismatched to the architecture
  • Legacy service checks for services not in use
  • Not used as a remediation engine or workflow system
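Intentional exclusions can be applied programmatically so the rationale travels with the disable action. A minimal sketch, assuming Security Hub's `describe_standards_controls` / `update_standards_control` APIs; the control IDs and reasons in the exclusion map are hypothetical examples, not a recommended list.

```python
# Sketch: disabling intentionally excluded Security Hub controls with a
# documented reason. The exclusion map below is illustrative.
EXCLUSIONS = {
    "CIS.1.20": "Overlaps with higher-fidelity IAM detection elsewhere",
    "S3.11": "Informational; no action path in this architecture",
}

def select_controls_to_disable(controls, exclusions):
    """Return (arn, reason) pairs for enabled controls on the exclusion list."""
    to_disable = []
    for control in controls:
        if (control["ControlStatus"] == "ENABLED"
                and control["ControlId"] in exclusions):
            to_disable.append(
                (control["StandardsControlArn"], exclusions[control["ControlId"]])
            )
    return to_disable

def apply_exclusions(standards_subscription_arn, exclusions):
    import boto3  # deferred so the pure helper above stays importable anywhere
    client = boto3.client("securityhub")
    controls = []
    paginator = client.get_paginator("describe_standards_controls")
    for page in paginator.paginate(StandardsSubscriptionArn=standards_subscription_arn):
        controls.extend(page["Controls"])
    for arn, reason in select_controls_to_disable(controls, exclusions):
        client.update_standards_control(
            StandardsControlArn=arn,
            ControlStatus="DISABLED",
            DisabledReason=reason,  # rationale stays visible in the console
        )
```

Passing `DisabledReason` means the "why" of each exclusion is recorded in the tool itself, not only in a separate document.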
Common Noise Patterns Addressed
  • Informational "best practice" findings with no clear action
  • Repetitive findings for the same systemic misconfiguration
  • Findings triggered by intentional architectural choices
  • Duplicate findings across multiple AWS services
Output Quality Standard
  • Must clearly identify the affected resource
  • Consistent, meaningful severity mapping
  • Sufficient context to understand the risk
  • Stable over time — no unnecessary churn
  • Findings that fail this standard are candidates for tuning or exclusion
🔍

Rapid7 InsightVM

Vulnerability assessment · Credentialed scanning · Risk prioritization

Scope & Cadence
  • Only assets with clear ownership and realistic remediation path in scope
  • Ephemeral, unmanaged, or uncontrolled assets excluded by default
  • Scan schedule balanced for timeliness vs. operational stability
  • Coverage health monitored for unexpected gaps or asset drift
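Monitoring coverage health can be sketched as a cadence check over completed scans. The `/api/3/scans` endpoint and its field names follow Rapid7's v3 REST API as documented; treat the exact parameters and response shape as assumptions to verify against your console's API reference.

```python
# Sketch: flag scans that errored or fell outside the expected cadence window.
from datetime import datetime, timedelta, timezone

def unhealthy_scans(scans, max_age_hours=26):
    """Return scans that did not finish cleanly, or whose last run is older
    than the cadence window (26h here allows drift on a daily schedule)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    flagged = []
    for scan in scans:
        ended = datetime.fromisoformat(scan["endTime"])
        if scan["status"] != "finished" or ended < cutoff:
            flagged.append(scan)
    return flagged

def fetch_recent_scans(base_url, user, password):
    # Requires the third-party 'requests' package; credentials are placeholders.
    import requests
    resp = requests.get(
        f"{base_url}/api/3/scans",
        params={"active": "false", "size": 100},
        auth=(user, password),
    )
    resp.raise_for_status()
    return resp.json()["resources"]
```

Running this on a schedule turns "scans completing on time" from an assumption into a checked property.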
Credentialed Scanning Strategy
  • Credentialed scanning is the preferred model for all managed assets
  • Deeper visibility, fewer false positives vs. surface-level inspection
  • Non-credentialed used selectively with documented limitations
  • Credential management is part of tooling design — not optional
Prioritization Approach
  • CVSS used as context, not as sole authority
  • Exploitability, exposure, and asset importance factor into risk
  • Persistent findings without credible attack scenarios deprioritized
  • High-impact and exposed assets always emphasized
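The prioritization posture above can be made concrete as a simple blended score, with CVSS as one input rather than the verdict. The weights and thresholds here are illustrative sketches, not a vendor formula or a recommended calibration.

```python
# Sketch: blending CVSS, exploitability, exposure, and asset importance
# into one 0-100 priority score. All weights are illustrative.
def priority_score(cvss, exploit_available, internet_exposed, asset_criticality):
    """cvss: 0-10; asset_criticality: 1 (low) .. 3 (high)."""
    score = cvss * 4                        # base severity, max 40
    score += 25 if exploit_available else 0  # known exploit path
    score += 20 if internet_exposed else 0   # reachable attack surface
    score += asset_criticality * 5           # business importance, max 15
    return round(score)

def should_deprioritize(cvss, exploit_available, internet_exposed, asset_criticality):
    """Persistent findings with no credible attack scenario: high CVSS alone
    does not keep a finding in the queue."""
    return (not exploit_available
            and not internet_exposed
            and asset_criticality == 1)
```

Under this scheme a CVSS 7.5 finding on an isolated, low-value asset scores below a CVSS 6 finding with a public exploit on an exposed host, which is the intended inversion.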
Managing Low-Value Signal
  • Vulnerabilities without realistic exploitation paths tuned out
  • Findings on non-critical or deprecated assets deprioritized
  • Engineering actively reviews recurring noise patterns
  • Output quality treated as an ongoing engineering responsibility
🛡️

Microsoft Defender for Cloud

Azure CSPM · Workload protection · Posture recommendations

Baseline Policy Scope
  • Core cloud security hygiene and identity/access controls
  • Network exposure controls and workload-level posture
  • Only actionable, repeatable signal enabled by default
  • Practices unrelated to deployed architecture excluded
Secure Score Philosophy
  • Treated as a directional indicator, not a performance metric
  • Engineering focuses on underlying recommendations, not the score
  • Cosmetic score improvements without risk reduction deprioritized
  • Score change analyzed for meaningful exposure shifts
Configuration Rationale
  • Driven by engineering judgment, not default vendor enablement
  • Overlap with higher-fidelity controls avoided intentionally
  • Each recommendation contributes unique, additive signal
  • Architectural patterns and workload criticality inform decisions
Noise Reduction
  • Generic or informational recommendations reviewed for exclusion
  • Idealized configurations vs. genuine posture gaps distinguished
  • Improves trust so monitoring teams focus on real gaps
  • Outputs that fluctuate without environmental change are investigated

Overlap Handling Examples

When multiple tools surface the same condition, one is designated as the authoritative source — preventing duplicated signal and conflicting narratives.

Example 01
Unencrypted S3 Storage

An S3 bucket without encryption detected by both AWS Security Hub and Rapid7 InsightVM.

Authoritative Source ☁️ AWS Security Hub

Fundamentally a configuration issue, not an exploitable vulnerability. Security Hub provides native AWS context and directly tracks the configuration state. InsightVM findings are deprioritized to prevent duplicate alerting.

Example 02
Missing OS Patches on EC2

Outdated OS packages detected by both AWS Security Hub (via Systems Manager) and Rapid7 InsightVM.

Authoritative Source 🔍 Rapid7 InsightVM

InsightVM is purpose-built for vulnerability assessment, providing CVSS scores, exploit availability, and detailed patch context. Security Hub patch findings are monitored for coverage validation only.

Example 03
Public Azure VM Exposure

An Azure VM with a public IP and permissive NSG rules flagged by both Defender for Cloud and Rapid7 InsightVM.

Authoritative Source 🛡️ Defender for Cloud

A network configuration issue best represented by the cloud-native tool that understands NSG context, virtual network topology, and Azure-specific controls. InsightVM informs scan prioritization only.

Example 04
Weak TLS Configuration

A web application using deprecated TLS 1.0 detected by both Rapid7 InsightVM and Microsoft Defender for Cloud.

Authoritative Source 🔍 Rapid7 InsightVM

InsightVM performs active testing with cipher suite analysis, protocol version testing, and exploitability context. Defender TLS recommendations are cross-referenced for coverage validation only.
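The four examples above amount to a small routing table: each overlapping condition class has exactly one authoritative tool, and findings from any other tool are tagged as duplicates rather than alerted on. The condition labels below are illustrative keys, not identifiers from any of the tools.

```python
# Sketch: authoritative-source routing for overlapping findings, encoding
# the four worked examples. Condition names are illustrative labels.
AUTHORITATIVE = {
    "unencrypted_s3":   "AWS Security Hub",     # Example 01
    "missing_os_patch": "Rapid7 InsightVM",     # Example 02
    "public_vm_nsg":    "Defender for Cloud",   # Example 03
    "weak_tls":         "Rapid7 InsightVM",     # Example 04
}

def route(findings):
    """Keep findings from the authoritative tool; set aside the rest as
    duplicates. Unmapped conditions pass through untouched."""
    keep, duplicates = [], []
    for finding in findings:
        owner = AUTHORITATIVE.get(finding["condition"])
        if owner is None or finding["tool"] == owner:
            keep.append(finding)
        else:
            duplicates.append(finding)
    return keep, duplicates
```

Keeping the table small and explicit is the point: every routing decision is reviewable, and adding a new overlap forces a deliberate choice of owner.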

Health Validation as a Security Control

A silently failing tool creates false assurance. These per-platform checks are first-class engineering obligations, not background maintenance tasks.

AWS Security Hub
Recent evaluation timestamps on all enabled standards
Finding counts aligned with expected resource inventory
Cross-region aggregation functioning if enabled
No "No data" status on active controls
Custom integrations updating on schedule
Rapid7 InsightVM
Scheduled scans completing successfully and on time
Scan duration trends reviewed for performance drift
New infrastructure captured within expected timeframes
No auth failures indicating credential rotation issues
Vulnerability data freshness aligned with scan cadence
Microsoft Defender for Cloud
Secure Score shows a recent calculation timestamp
All subscriptions in inventory with active assessment
Recommendations evaluated — no stale "last assessed" dates
No missing or inactive policies that should be enabled
Log Analytics workspace connectivity maintained
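The per-platform checks above share one pattern: each tool reports a last-success timestamp, and anything older than its expected cadence is treated as a failing security control. A minimal sketch of that freshness gate; the check names and cadence values are illustrative.

```python
# Sketch: cross-platform freshness gate. Cadences are illustrative and
# should mirror each tool's actual schedule plus tolerated drift.
from datetime import datetime, timedelta, timezone

CADENCE = {
    "security_hub_evaluation": timedelta(hours=24),
    "insightvm_scan":          timedelta(hours=26),
    "defender_assessment":     timedelta(hours=24),
}

def stale_controls(last_success, now=None):
    """Return check names whose last success exceeds the allowed cadence."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, ts in last_success.items()
        if now - ts > CADENCE.get(name, timedelta(hours=24))
    )
```

A non-empty result here is handled as a security incident in the making, not a backlog item, which is the "tool health = security control" principle in executable form.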

Documented Decision Rationale

Configuration choices are explicitly documented to remain understandable, defensible, and repeatable over time.

Decision 01 · AWS Config
Selective Rule Enablement Over Exhaustive Coverage
Problem

Enabling all available rules generated a high volume of findings — many informational, expected by design, or irrelevant to actual risk. Meaningful misconfigurations were indistinguishable from noise.

Decision

Only rules detecting high-impact risks (public exposure, missing encryption, weakened identity controls) were enabled. Idealized best-practice rules without a clear security impact were excluded.

Outcome

Finding volume significantly reduced while relevance increased. Security Hub outputs became easier to interpret, reflecting real security risk rather than theoretical compliance gaps.

Decision 02 · Rapid7 InsightVM
Credentialed Scanning as the Organizational Default
Problem

Non-credentialed scans produced incomplete results. Important system-level vulnerabilities were missed, and findings lacked the accuracy needed for confident prioritization.

Decision

Credentialed scanning established as the default for all managed assets. Non-credentialed scanning limited to environments where credentials are not feasible, with its limitations explicitly documented.

Outcome

Detection accuracy improved, false positives reduced, and scan results stabilized across cycles. Vulnerability data could be used directly for prioritization without manual validation.

Decision 03 · Microsoft Defender for Cloud
Deprioritizing Secure Score Optimization
Problem

Some recommendations significantly increased the Secure Score while having little effect on actual exposure, encouraging teams to optimize for metrics rather than real risk reduction.

Decision

Engineering effort directed at recommendations that reduce exposure, address systemic weaknesses, or improve control effectiveness — even when they have limited score impact.

Outcome

Effort aligned with genuine risk reduction. Secure Score is now used to observe posture trends over time rather than as a performance target to chase.