Kubernetes security best practices start with an uncomfortable truth: Kubernetes is not secure by default. A default installation allows pods to run as root, has no network segmentation between pods, stores secrets base64-encoded (not encrypted), and grants broad default service account permissions. Every cluster you deploy starts in a state that would fail a basic security audit.
The gap between a working cluster and a hardened cluster is not dramatic; it is a specific set of configuration decisions applied systematically. The CIS Kubernetes Benchmark contains 200+ recommendations. The highest-impact priorities are RBAC enforcement, Pod Security Standards, network policies, etcd encryption, and audit logging. This guide covers those priorities with the actual YAML and commands you apply in production, organized by the layer they protect: control plane, workload configuration, network segmentation, image security, runtime detection, and continuous compliance scanning.
The Kubernetes Attack Surface
Understanding which components are exposed is the prerequisite for applying kubernetes security best practices intelligently.
The API server is the front door. Every interaction with the cluster goes through it. A misconfigured API server with anonymous authentication enabled or overly broad RBAC rules gives attackers direct control. etcd stores all cluster state, including secrets; direct access to etcd means access to every secret in the cluster. It must be encrypted at rest and accessible only to the API server. The kubelet runs on every node and executes pod workloads; a kubelet that accepts anonymous or unauthorized requests lets an attacker run commands in containers on that node.
At the workload layer, the threat model has three primary surfaces:
Privileged pods. A container running as root with excessive Linux capabilities can break out of the container namespace and compromise the node. Privileged pods, host-network pods, and pods with hostPID: true or hostIPC: true can access host-level resources directly.
Overly permissive RBAC. Broad role grants allow lateral movement and privilege escalation once an attacker gains any foothold. A compromised pod whose service account is bound to a ClusterRole that can read secrets cluster-wide can exfiltrate every credential in the cluster.
Supply chain. Malicious or vulnerable container images introduce attacker-controlled code. Without image scanning and admission policies that enforce image integrity, any image from any registry can run in your cluster.
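These workload-layer surfaces can be inventoried with a jq filter over the pod list. A sketch using a small sample document in place of live cluster output (the file path and pod entries are made up for illustration); against a real cluster, pipe `kubectl get pods -A -o json` into the same filter:

```shell
# Sample pod list standing in for `kubectl get pods -A -o json`
cat <<'EOF' > /tmp/pods.json
{"items":[
  {"metadata":{"namespace":"default","name":"safe-app"},
   "spec":{"containers":[{"securityContext":{"privileged":false}}]}},
  {"metadata":{"namespace":"kube-system","name":"risky-agent"},
   "spec":{"hostNetwork":true,"hostPID":true,
           "containers":[{"securityContext":{"privileged":true}}]}}
]}
EOF

# Flag pods that touch host namespaces or run privileged
jq -r '.items[]
  | select((.spec.hostNetwork == true) or (.spec.hostPID == true) or (.spec.hostIPC == true)
           or ([.spec.containers[]?.securityContext.privileged // false] | any))
  | .metadata.namespace + "/" + .metadata.name' /tmp/pods.json
# kube-system/risky-agent
```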
Kubernetes Security Best Practices: RBAC
RBAC is Kubernetes’ primary authorization mechanism. Most clusters fail at RBAC not because it is misconfigured, but because it is over-permissive by default and nobody reviews it systematically.
The four RBAC principles:
Least privilege. Every service account, user, and group should have exactly the permissions required for their function, no more. If a workload only needs to read ConfigMaps in its own namespace, it should not have permission to read secrets or list pods cluster-wide.
Namespace-scoped over cluster-scoped. Use Role and RoleBinding (namespace-scoped) wherever possible. Use ClusterRole and ClusterRoleBinding only when cross-namespace access is genuinely required. Most application workloads never need cluster-wide permissions.
Never use the default service account. The default service account in every namespace automatically mounts a token that has varying permissions depending on your cluster configuration. Create a dedicated service account for each workload.
Never bind to cluster-admin for convenience. Almost no workload needs cluster-admin. Operators, CI/CD pipelines, and monitoring tools that run with cluster-admin are the most common source of privilege escalation in compromised clusters.
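Complementing the dedicated-account principle above, token automount can also be switched off on each namespace's default service account, so any pod that silently falls back to it receives no API credentials. A sketch (the namespace name is illustrative):

```yaml
# Sketch: disable token automount on the default service account itself,
# so workloads that fall back to it receive no API credentials
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: production
automountServiceAccountToken: false
```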
Correctly scoped RBAC for a typical application:
```yaml
# 1. Dedicated service account - no auto-mounted token
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-api
  namespace: production
automountServiceAccountToken: false  # Explicitly disabled
---
# 2. Role with minimum required permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payment-api
  namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
  resourceNames: ["payment-api-config"]  # Restrict to specific resource names
---
# 3. RoleBinding - namespace-scoped
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-api
  namespace: production
subjects:
- kind: ServiceAccount
  name: payment-api
  namespace: production
roleRef:
  kind: Role
  name: payment-api
  apiGroup: rbac.authorization.k8s.io
```

Auditing existing RBAC:
```shell
# Find all ClusterRoleBindings granting cluster-admin
kubectl get clusterrolebindings -o json | jq '
  .items[] |
  select(.roleRef.name == "cluster-admin") |
  {name: .metadata.name, subjects: .subjects}'

# Check what permissions a specific service account has
kubectl auth can-i --list \
  --as=system:serviceaccount:production:payment-api \
  -n production

# Install kubectl-who-can for comprehensive RBAC audit
kubectl krew install who-can
kubectl who-can get secrets -n production
kubectl who-can create pods --all-namespaces
```

Run the audit quarterly. RBAC configurations drift: permissions added for debugging are rarely removed, CRD installations add new ClusterRoles that may be broader than necessary, and service accounts accumulate permissions over time.
Kubernetes Security Best Practices: Pod Security Standards
PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25. Pod Security Standards (PSS) replace it, defining three profiles: Privileged (unrestricted), Baseline (blocks known privilege escalation), and Restricted (full hardening for application workloads).
Pod Security Admission (PSA) enforces these profiles at the namespace level via labels. It operates in three modes: enforce (reject violating pods), audit (log violations, allow pods), and warn (warn the user, allow pods).
Production namespace configuration:
```yaml
# Enforce the restricted profile in production
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
```

Staging and dev – warn before enforce:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    # Warn on violations, do not reject — allows testing before enforcing
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
```

A pod manifest that passes the Restricted profile:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api
  namespace: production
spec:
  selector:              # Required for a valid Deployment
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      serviceAccountName: payment-api
      # Pod-level security context
      securityContext:
        runAsNonRoot: true
        runAsUser: 10000
        runAsGroup: 10000
        fsGroup: 20000
        seccompProfile:
          type: RuntimeDefault  # Required for Restricted profile
      containers:
      - name: payment-api
        image: your-registry/payment-api:v2.4.1@sha256:abc123...
        # Container-level security context
        securityContext:
          allowPrivilegeEscalation: false  # Required for Restricted
          readOnlyRootFilesystem: true     # Prevents writes to container FS
          capabilities:
            drop:
            - ALL  # Drop all Linux capabilities
            # add:                 # Only add back specific capabilities if needed
            # - NET_BIND_SERVICE   # Example: if binding to port < 1024
```

The `seccompProfile: RuntimeDefault` setting is required for the Restricted profile in Kubernetes 1.25+. It applies the default seccomp profile from the container runtime, which blocks a significant number of dangerous syscalls while allowing everything a typical application needs.
Apply PSA to all existing namespaces in warn mode first:
```shell
# Apply warn labels to all namespaces to see violations without breaking anything
for ns in $(kubectl get ns -o name | cut -d/ -f2 | grep -v kube-system | grep -v argocd); do
  kubectl label ns $ns \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/warn-version=latest \
    --overwrite
done

# Check for violations - any pod redeployment will show warnings
kubectl get events --all-namespaces | grep "violates PodSecurity"
```
Kubernetes Security Best Practices: Network Policies
Default Kubernetes networking is fully open: every pod can reach every other pod across every namespace. Implementing default-deny network policies in every namespace and adding explicit allow rules is essential for micro-segmentation.
Step 1 – Default deny all ingress and egress:
```yaml
# Apply this to every namespace immediately
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Applies to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
  # No rules = deny all
```

Step 2 – Allow DNS resolution (required after default-deny egress):
```yaml
# Without this, all pods lose DNS and cannot resolve service names
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

If you deploy a default-deny egress policy without explicitly allowing DNS, pods lose the ability to resolve service names. Always add the DNS egress rule immediately after the default-deny policy.
Step 3 – Add specific allow rules per workload:
```yaml
# Allow ingress to payment-api from frontend only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-api-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000
---
# Allow payment-api egress to database only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-api-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  # Plus DNS (covered by the allow-dns policy above)
```

Validating network policies:
```shell
# Test connectivity between pods (should be blocked by default-deny)
kubectl run test-pod --image=curlimages/curl --rm -it --restart=Never \
  -n production -- curl -v http://payment-api:3000/health

# Verify the specific policy is applied
kubectl get networkpolicies -n production
kubectl describe networkpolicy default-deny-all -n production

# Use netassert or cyclonus for comprehensive network policy testing
```

Network policies are enforced by the CNI plugin, not Kubernetes itself. Verify that your CNI supports NetworkPolicy enforcement: Cilium, Calico, and Weave all do. Flannel does not enforce NetworkPolicy by default.
Kubernetes Security Best Practices: Image Security
Every container image is an attack surface. An image with a critical vulnerability in an unpatched base layer gives an attacker a vector that no amount of RBAC or network policy configuration can prevent once the process is running.
The five image security controls:
1. Scan images in CI before they reach the registry:
```yaml
# .github/workflows/image-security.yml
name: Image Security Scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Build image
      run: docker build -t my-app:${{ github.sha }} .
    - name: Scan with Trivy
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: my-app:${{ github.sha }}
        format: sarif
        output: trivy-results.sarif
        severity: CRITICAL,HIGH
        exit-code: 1           # Fail the build on critical/high vulnerabilities
        ignore-unfixed: true   # Do not fail on vulnerabilities with no fix
    - name: Upload scan results
      uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: trivy-results.sarif
```

2. Use image digests instead of tags in production:
```yaml
# Tags are mutable - the same tag can point to different images
# Digests are immutable - this exact image, always
image: your-registry/payment-api:v2.4.1@sha256:3a6f3d...
```

```shell
# Generate the digest after pushing:
docker push your-registry/payment-api:v2.4.1
docker inspect your-registry/payment-api:v2.4.1 --format='{{index .RepoDigests 0}}'
```

3. Sign images with Sigstore/Cosign:
```shell
# Install cosign
brew install cosign

# Sign the image after pushing (uses keyless signing with OIDC)
cosign sign --yes your-registry/payment-api:v2.4.1@sha256:3a6f3d...

# Verify the signature
cosign verify \
  --certificate-identity-regexp="https://github.com/your-org/.*" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  your-registry/payment-api:v2.4.1@sha256:3a6f3d...
```

4. Enforce admission policies with Kyverno:
```yaml
# Kyverno policy: require image digests on all production pods
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-digest
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-digest
    match:
      any:
      - resources:
          kinds: [Deployment, StatefulSet]
          namespaces: [production]
    validate:
      message: "Production images must use SHA digest, not mutable tags"
      pattern:
        spec:
          template:
            spec:
              containers:
              - image: "*@sha256:*"
---
# Kyverno policy: require Trivy scan results annotation
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-scan
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-scan-annotation
    match:
      any:
      - resources:
          kinds: [Deployment]
          namespaces: [production]
    validate:
      message: "Images must be scanned before production deployment"
      pattern:
        metadata:
          annotations:
            security.alpha.kubernetes.io/trivy-scan-date: "?*"
```

5. Continuous in-cluster scanning with Trivy Operator:
```shell
# Install Trivy Operator - scans all workloads continuously
helm repo add aquasecurity https://aquasecurity.github.io/helm-charts/
helm install trivy-operator aquasecurity/trivy-operator \
  --namespace trivy-system \
  --create-namespace \
  --set="trivy.ignoreUnfixed=true"

# Check vulnerability reports
kubectl get vulnerabilityreports --all-namespaces
kubectl get vulnerabilityreport -n production payment-api-xxxx -o yaml

# Check configuration audit reports
kubectl get configauditreports --all-namespaces
```

Trivy Operator creates VulnerabilityReport, ConfigAuditReport, and RbacAssessmentReport custom resources in the same namespace as each workload. These integrate with Prometheus for alerting on new critical vulnerabilities discovered after deployment.
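Those reports can feed alerting directly. A PrometheusRule sketch, assuming the Operator's metrics endpoint is scraped and that the metric name `trivy_image_vulnerabilities` matches your chart version; the rule name, namespace, and threshold are illustrative assumptions:

```yaml
# Sketch: page on any critical CVE in a running production workload
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: trivy-critical-vulns
  namespace: trivy-system
spec:
  groups:
  - name: trivy
    rules:
    - alert: CriticalVulnerabilityInProduction
      expr: sum by (namespace, name) (trivy_image_vulnerabilities{severity="Critical", namespace="production"}) > 0
      for: 15m
      labels:
        severity: critical
      annotations:
        summary: "Critical CVE in running workload {{ $labels.namespace }}/{{ $labels.name }}"
```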
Kubernetes Security Best Practices: Runtime Security with Falco
Image scanning catches known vulnerabilities before deployment. Runtime security catches malicious behavior after deployment, when an attacker exploits an unknown vulnerability, a compromised dependency, or a misconfiguration that scanning did not catch.
Falco with eBPF is the production standard for Kubernetes runtime security in 2026. Falco monitors syscalls from the kernel and alerts when behavior deviates from expected patterns: a shell spawned inside a container, a container reading /etc/shadow, unexpected network connections from a pod that should only communicate with its database.
Install Falco with eBPF driver:
```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# driver.kind=ebpf: eBPF driver — no kernel module required
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace \
  --set driver.kind=ebpf \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/..." \
  --set falcosidekick.config.slack.minimumpriority=warning
```

Custom Falco rules for Kubernetes environments:
```yaml
# /etc/falco/rules.d/custom-kubernetes.yaml

# Exception list referenced by the first rule: populate with images where an
# interactive shell is expected (the example item is illustrative)
- list: allowed_shell_containers
  items: [docker.io/library/busybox]

- rule: Shell Spawned in Container
  desc: A shell was spawned in a container - likely interactive access
  condition: >
    spawned_process and
    container and
    not container.image.repository in (allowed_shell_containers) and
    proc.name in (bash, sh, zsh, dash, fish)
  output: >
    Shell spawned in container
    (user=%user.name user_loginname=%user.loginname
    command=%proc.cmdline pid=%proc.pid
    container_id=%container.id image=%container.image.repository:%container.image.tag
    namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: WARNING
  tags: [container, shell, mitre_execution]

- rule: Unexpected Outbound Connection from Database Pod
  desc: Database pod is making unexpected outbound connections
  condition: >
    outbound and
    container and
    k8s.pod.label.app = "postgres" and
    not fd.sport in (5432)
  output: >
    Unexpected outbound connection from database pod
    (command=%proc.cmdline connection=%fd.name
    namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: CRITICAL

- rule: Sensitive File Read in Container
  desc: Sensitive file read inside a container
  condition: >
    open_read and
    container and
    fd.name in (/etc/shadow, /etc/sudoers, /root/.ssh/id_rsa, /root/.aws/credentials) and
    not proc.name in (systemd, sshd)
  output: >
    Sensitive file read in container
    (user=%user.name file=%fd.name
    container_id=%container.id image=%container.image.repository
    namespace=%k8s.ns.name pod=%k8s.pod.name)
  priority: CRITICAL
```

Kubernetes Security Best Practices: Secrets Management
etcd stores all cluster state, including secrets. Base64 encoding is not encryption; anyone with etcd access can read every secret in the cluster.
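The point is easy to demonstrate with nothing but coreutils (the password value here is made up):

```shell
# What a Secret's data field looks like in etcd without encryption at rest:
echo -n "super-secret-password" | base64
# c3VwZXItc2VjcmV0LXBhc3N3b3Jk

# Anyone who can read the Secret object or etcd reverses it trivially:
echo "c3VwZXItc2VjcmV0LXBhc3N3b3Jk" | base64 -d
# super-secret-password
```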
Enable etcd encryption at rest:
```yaml
# /etc/kubernetes/encryption-config.yaml (control plane)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}  # Fallback for reading unencrypted secrets during migration
```

```shell
# Add to the kube-apiserver manifest or flags:
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml

# Verify secrets are encrypted in etcd
ETCDCTL_API=3 etcdctl get /registry/secrets/production/db-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key | hexdump -C | head
# Output should show encrypted bytes, not readable JSON
```

For managed Kubernetes (EKS, GKE, AKS), enable envelope encryption using the cloud KMS (AWS KMS, Google Cloud KMS, Azure Key Vault) through the cluster configuration; the control plane is managed, but enabling secrets encryption is still your responsibility.
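The `<base64-encoded-32-byte-key>` placeholder expects a random 32-byte key, base64-encoded. One way to produce it; treat the output as a credential and store it accordingly:

```shell
# Generate a random 32-byte key and base64-encode it for the aescbc provider
head -c 32 /dev/urandom | base64
```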
For workload secrets, see the GitOps Kubernetes guide for the three patterns (Sealed Secrets, External Secrets Operator, SOPS) that keep secrets out of Git and out of plain etcd storage simultaneously.
Kubernetes Security Best Practices: CIS Benchmark with kube-bench
Open-source tools like kube-bench audit your cluster against the CIS Kubernetes Benchmark. Running kube-bench after initial cluster setup and after major configuration changes catches control plane misconfigurations that workload-level tools miss.
```shell
# Run kube-bench as a Job
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs $(kubectl get pods -l app=kube-bench -o name) -n default

# Output shows PASS/FAIL/WARN per control with remediation:
# [PASS] 1.1.1 Ensure that the API server pod specification file ...
# [FAIL] 1.2.1 Ensure that the --anonymous-auth argument is set to false
# [WARN] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set
```

For managed clusters (EKS, GKE, AKS):
```shell
# Use the managed cluster variant — skips control plane checks you cannot change
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-eks.yaml
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-gke.yaml
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-aks.yaml
```

On managed Kubernetes (EKS, GKE, AKS) you cannot modify the control plane configuration; the cloud provider handles most control plane hardening, but you are still responsible for RBAC, pod security, and workload configuration.
Kubescape for continuous compliance:
```shell
# Install Kubescape
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

# Scan against CIS Kubernetes Benchmark
kubescape scan framework cis-v1.23-t1.0.1 --enable-host-scan

# Scan against NSA-CISA hardening guidelines
kubescape scan framework nsa

# Scan against MITRE ATT&CK framework
kubescape scan framework mitre

# Continuous scanning in-cluster (Operator)
helm repo add kubescape https://kubescape.github.io/helm-charts/
helm install kubescape kubescape/kubescape-operator \
  -n kubescape \
  --create-namespace \
  --set clusterName=$(kubectl config current-context)
```

Kubernetes Security Best Practices: Audit Logging
Audit logs record every request made to the Kubernetes API server: who made it, what they requested, and what the server did. Without audit logging, you have no forensic capability when an incident occurs.
```yaml
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log secret access at Request level (captures what was read)
- level: Request
  resources:
  - group: ""
    resources: [secrets]
  verbs: [get, list, watch, create, update, patch, delete]
# Log RBAC changes at RequestResponse level
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: [roles, clusterroles, rolebindings, clusterrolebindings]
# Log pod exec and pod attach (common attacker technique)
- level: RequestResponse
  resources:
  - group: ""
    resources: [pods/exec, pods/attach, pods/portforward]
# Log authentication failures
- level: Metadata
  omitStages: [RequestReceived]
  users: [system:anonymous]
# Default: log metadata only (captures who/what without full body)
- level: Metadata
  omitStages: [RequestReceived]
```

Ship audit logs to a centralized system outside the cluster (CloudWatch, Datadog, Elasticsearch). Audit logs stored only on the control plane node are at risk if the node is compromised.
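One way to ship them is a node-level log shipper tailing the audit log file. A Fluent Bit sketch in classic config syntax; the file path, Elasticsearch host, port, and index name are illustrative assumptions, not values from this guide:

```ini
# Fluent Bit: ship kube-apiserver audit logs off the node
[INPUT]
    Name    tail
    Path    /var/log/kubernetes/audit/audit.log
    Parser  json
    Tag     k8s-audit

[OUTPUT]
    Name    es
    Match   k8s-audit
    Host    logging.internal.example.com
    Port    9200
    Index   k8s-audit
    tls     On
```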
The Kubernetes Security Best Practices Checklist
RBAC
[ ] No workload binds to cluster-admin
[ ] Every workload has a dedicated service account
[ ] automountServiceAccountToken: false on all service accounts
that do not call the Kubernetes API
[ ] RBAC audit run quarterly (kubectl-who-can or rbac-tool)
[ ] No wildcard verbs (*) on production ClusterRoles
POD SECURITY
[ ] Pod Security Admission enforcing restricted profile
on all production namespaces
[ ] runAsNonRoot: true on all containers
[ ] allowPrivilegeEscalation: false on all containers
[ ] capabilities.drop: [ALL] on all containers
[ ] readOnlyRootFilesystem: true where possible
[ ] seccompProfile: RuntimeDefault on all pods
NETWORK
[ ] Default-deny NetworkPolicy in every namespace
[ ] DNS egress explicitly allowed (UDP/TCP port 53 to kube-system)
[ ] All ingress and egress explicitly allowed by label selector
[ ] CNI confirmed to enforce NetworkPolicy (Cilium, Calico, Weave)
IMAGE SECURITY
[ ] Trivy scanning in CI pipeline — blocking on CRITICAL/HIGH
[ ] Image digests (sha256) used in production manifests, not tags
[ ] Cosign image signing on all production images
[ ] Kyverno policy enforcing digest requirement in production
[ ] Trivy Operator deployed for continuous in-cluster scanning
RUNTIME
[ ] Falco deployed with eBPF driver
[ ] Falco rules for shell-in-container, sensitive file reads,
unexpected outbound connections
[ ] Falco alerts routing to Slack/PagerDuty
SECRETS
[ ] etcd encryption at rest enabled (or cloud KMS envelope encryption)
[ ] No plain Kubernetes Secrets committed to Git
[ ] Sealed Secrets or External Secrets Operator in use
COMPLIANCE
[ ] kube-bench run after cluster setup and major changes
[ ] Kubescape Operator deployed for continuous CIS/NSA scanning
[ ] Audit logging enabled and shipping to external system
[ ] Certificate rotation configured (cert-manager)

Teams running specialized workloads on Kubernetes, such as blockchain validators, need controls beyond this baseline. The slashing-specific failure modes are covered in our Kubernetes validator security guide.
Conclusion
Kubernetes security best practices are not a one-time configuration; they are an operational discipline that degrades without continuous attention. RBAC permissions accumulate. Images age and develop new vulnerabilities. New workloads deployed without PSA labels create exceptions. The combination of Trivy Operator for continuous vulnerability scanning, Kubescape for continuous CIS compliance, and Falco for runtime detection closes the feedback loop that keeps hardening current.
Start with the highest-impact, lowest-effort controls first: audit your RBAC, enable Pod Security Admission in warn mode on all namespaces, and deploy Trivy Operator. These three steps give you immediate visibility and prevent the most common privilege escalations without breaking anything. Add network policies, Falco, and Kyverno admission policies as the baseline stabilizes.
At The Good Shell we implement Kubernetes security hardening for startups and platform engineering teams. See our DevOps and infrastructure services or our case studies.
For the authoritative reference, the CIS Kubernetes Benchmark and the NSA-CISA Kubernetes Hardening Guide are the two documents that all production kubernetes security best practices are derived from.