Solana RPC Node Setup: The Ultimate Production Guide for 2026

Solana RPC node setup is the infrastructure decision that sits behind every wallet balance check, every transaction submission, every dApp query, and every trading bot on the network. When your RPC node responds in 30ms, applications feel instant. When it falls behind by 50 slots, wallets show wrong balances, transactions fail, and users blame the product.

Getting Solana RPC node setup right from the start prevents the operational problems that push teams off self-hosted infrastructure and back onto shared endpoints with rate limits they cannot control. This guide covers the complete setup for 2026: how an RPC node differs from a validator, hardware requirements, the full node vs archive node decision, system tuning, startup flags, Nginx reverse proxy and rate limiting, Geyser plugin configuration, and the monitoring stack that tells you when something is wrong before your users do.

RPC Node vs Validator: The Critical Distinction

Before configuring a Solana RPC node, understand precisely what an RPC node is and is not, because conflating it with a validator leads to hardware under-provisioning and operational confusion.

A Solana validator participates in consensus: it votes on blocks, produces blocks when assigned as leader, and earns inflation rewards and MEV revenue. It pays approximately 1.1 SOL/day in vote transaction fees regardless of stake level.

A Solana RPC node runs exactly the same Agave software but with three differences: it does not have a funded vote account, it does not participate in consensus, and it does not earn staking rewards. It serves JSON-RPC and WebSocket API requests to applications. The --no-voting flag is what converts the validator software into an RPC node.

What this means operationally:

No vote fees. An RPC node costs nothing in SOL to operate at the protocol level. The costs are purely hardware, hosting, and bandwidth. This is the primary economic reason teams run their own RPC nodes: they escape the rate limits and shared infrastructure of public endpoints without the roughly $50,000/year vote fee floor that validator operation carries.

What it does not mean:

Lower hardware requirements. This is the most common misconception. RPC nodes actually require more RAM and more storage than consensus validators. A validator maintains current state and a limited ledger window; an RPC node is expected to answer arbitrary queries (account state, transaction history, program accounts), which requires substantially larger working memory and deeper ledger retention.

Never run RPC and validator workloads on the same machine. The query load from RPC requests interferes with the time-sensitive consensus operations of a validator, causing vote latency and missed credits.

Full Node vs Archive Node: The First Decision in Solana RPC Node Setup

Before any hardware or configuration decision, determine which type of RPC node you need. The distinction drives hardware sizing by an order of magnitude.

Full RPC node: Stores current ledger state plus approximately 2-3 days of recent slots. Sufficient for real-time use cases: wallet balance checks, transaction submission, live dApp queries, trading bots, monitoring systems. The vast majority of production Solana RPC nodes are full nodes.

Archive RPC node: Stores the complete ledger history from genesis. Required for block explorers, compliance and AML systems, historical analytics, backtesting trading strategies, ML data pipelines. As of 2026, an unpruned Solana ledger exceeds 400TB, making full archive self-hosting a significant infrastructure commitment. Most teams needing archive access use BigTable integration rather than storing the full history locally.
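If you go the BigTable route, Agave exposes a dedicated flag for serving history it no longer holds on local disk. The fragment below is a sketch; the credentials path is a placeholder, and the node must be able to reach Google Cloud:

```shell
# In rpc-node.sh, add to the agave-validator flags:
#   --enable-rpc-bigtable-ledger-storage \
# Credentials are read from the standard Google environment variable,
# exported before the exec line (path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/home/sol/bigtable-credentials.json
```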

The decision framework:

  • Do you need getTransaction for transactions older than 2-3 days? Archive node or BigTable integration.
  • Do you need getSignaturesForAddress for historical activity? Archive node or BigTable integration.
  • Do you need real-time balance queries, live transaction submission, account monitoring? Full node.
  • Are you serving a high-traffic public API? Multiple full nodes behind a load balancer.

For the majority of teams running a solana rpc node setup for a specific application or protocol, a full node with appropriate ledger retention covers all requirements.
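You can also probe how deep an endpoint's history actually goes before committing to a node type: getFirstAvailableBlock returns the oldest slot the node can serve, and the gap to the current slot converts to days at Solana's ~0.4s slot time. The helper below is our own sketch (the live calls assume curl and jq):

```shell
#!/bin/bash
# Estimate how many days of history an RPC endpoint retains.
history_days() {  # usage: history_days FIRST_AVAILABLE_SLOT CURRENT_SLOT
  awk -v f="$1" -v c="$2" 'BEGIN { printf "%.1f\n", (c - f) * 0.4 / 86400 }'
}

# Live usage against a running node:
# ENDPOINT=http://127.0.0.1:8899
# FIRST=$(curl -s "$ENDPOINT" -X POST -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","id":1,"method":"getFirstAvailableBlock"}' | jq '.result')
# CURRENT=$(curl -s "$ENDPOINT" -X POST -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' | jq '.result')
# history_days "$FIRST" "$CURRENT"

history_days 279568000 280000000   # 432000 slots at 0.4s per slot is about 2 days
```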

Hardware Requirements for Solana RPC Node Setup

RPC nodes have higher RAM and storage requirements than validators, and the hardware must be sized for query concurrency, not just ledger processing.

CPU

The CPU requirements are similar to a production validator: single socket, high core count, high clock speed. Solana RPC nodes handle signature verification, state deserialization, concurrent connection management, and heavy getProgramAccounts queries simultaneously.

Minimum viable (light traffic):

  • 16-24 core CPU, single socket.
  • 3.0GHz+ base clock.

Production mainnet:

  • AMD EPYC 9354P or Threadripper PRO 7965WX, single socket.
  • 24-32 cores, 3.5GHz+ base clock, 4GHz+ boost.
  • AVX2 and SHA extensions required.

Single socket is mandatory. Dual-socket configurations introduce NUMA latency that degrades query performance at Solana’s concurrency levels.

RAM

RAM is where RPC requirements diverge most sharply from validator requirements. Under heavy concurrent query load, particularly getProgramAccounts calls (which load entire program state into memory), RAM becomes the primary bottleneck.

Full node (production mainnet):

  • Minimum: 384GB ECC DDR4/DDR5
  • Recommended: 512GB ECC
  • With --account-index enabled: 1TB+ recommended.

--account-index creates in-memory indices that dramatically speed up getProgramAccounts queries but consume proportionally more RAM. For nodes serving DeFi applications that make frequent program account queries, the index is worth the memory cost. For nodes primarily handling transaction submission and balance checks, it is optional.
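If you do enable the index, Agave supports several index kinds; a startup-script fragment might look like the following (verify the kind names against `agave-validator --help` for your version):

```shell
  --account-index program-id \
  --account-index spl-token-owner \
  --account-index spl-token-mint \
```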

Archive node:

  • 512GB minimum, 1TB recommended.
  • Archive nodes handling concurrent historical queries can saturate even 1TB under load.

Storage

Storage is the most complex dimension of the setup and requires the same split-disk configuration as a validator: accounts and ledger on separate physical NVMe drives.

Full node storage layout:

Disk        Size                             Purpose                     Performance requirement
OS          500GB NVMe                       Operating system, logs      Standard enterprise NVMe
Accounts    1TB+ NVMe                        Account state, snapshots    High IOPS random access
Ledger      2TB+ NVMe                        Recent ledger (2-3 days)    High sequential write throughput
Snapshots   500GB NVMe (optional separate)   Snapshot staging            Sequential write

Archive node storage layout:

  • Accounts: 2TB+ NVMe
  • Ledger (full history): 10TB+ NVMe (growing ~90TB/year — most teams use BigTable for deep history).
  • OS: 500GB NVMe

Enterprise NVMe with high TBW (Total Bytes Written) ratings is mandatory for both accounts and ledger disks. Consumer NVMe drives will degrade within weeks under Solana’s sustained I/O load.
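Drive wear is worth tracking once the node is live. NVMe devices report a "Percentage Used" endurance estimate via SMART; the threshold helper below is our own convention, and the parsing line assumes smartmontools' usual output format:

```shell
#!/bin/bash
# Flag drives approaching their rated endurance (80% threshold is our choice).
wear_status() {  # usage: wear_status PERCENTAGE_USED
  if [ "$1" -ge 80 ]; then echo "REPLACE SOON"; else echo "OK"; fi
}

# Live usage (requires smartmontools):
# PCT=$(sudo smartctl -a /dev/nvme1n1 | awk '/Percentage Used/ {gsub("%","",$3); print $3}')
# wear_status "$PCT"

wear_status 5
wear_status 85
```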

Network

  • Minimum: 1Gbps symmetric.
  • Production: 10Gbps strongly recommended.
  • Public-facing RPC nodes serving significant traffic: 10Gbps minimum due to Solana’s P2P gossip bandwidth plus query traffic.

A dedicated public IP is required. Do not run an RPC node behind NAT.

Step 1: Server and Disk Preparation

The RPC node follows the same base system preparation as a validator setup: Ubuntu 24.04, a dedicated sol user, and dedicated NVMe mounts, just with different disk sizing.

# Create sol user
sudo adduser sol
sudo usermod -aG sudo sol
sudo su - sol

# Update packages
sudo apt update && sudo apt upgrade -y

# Install dependencies
sudo apt install -y libssl-dev libudev-dev pkg-config zlib1g-dev \
  llvm clang cmake make libprotobuf-dev protobuf-compiler nginx

Format and mount disks:

# Ledger disk (2TB+ NVMe)
sudo mkfs.ext4 -F /dev/nvme1n1
sudo mkdir -p /mnt/ledger
sudo mount /dev/nvme1n1 /mnt/ledger
sudo chown -R sol:sol /mnt/ledger
echo "/dev/nvme1n1 /mnt/ledger ext4 defaults,noatime 0 0" | sudo tee -a /etc/fstab

# Accounts disk (1TB+ NVMe)
sudo mkfs.ext4 -F /dev/nvme2n1
sudo mkdir -p /mnt/accounts
sudo mount /dev/nvme2n1 /mnt/accounts
sudo chown -R sol:sol /mnt/accounts
echo "/dev/nvme2n1 /mnt/accounts ext4 defaults,noatime 0 0" | sudo tee -a /etc/fstab

# Snapshots disk (optional, 500GB+)
sudo mkfs.ext4 -F /dev/nvme3n1
sudo mkdir -p /mnt/snapshots
sudo mount /dev/nvme3n1 /mnt/snapshots
sudo chown -R sol:sol /mnt/snapshots
echo "/dev/nvme3n1 /mnt/snapshots ext4 defaults,noatime 0 0" | sudo tee -a /etc/fstab
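One hardening note on the fstab entries above: raw device names like /dev/nvme1n1 can be renumbered across reboots or hardware changes, so mounting by UUID is more robust. The fstab_line helper is ours, purely for illustration; blkid is the standard tool:

```shell
#!/bin/bash
# Build a UUID-based fstab entry instead of a device-name one.
fstab_line() {  # usage: fstab_line UUID MOUNTPOINT
  echo "UUID=$1 $2 ext4 defaults,noatime 0 0"
}

# Live usage:
# UUID=$(sudo blkid -s UUID -o value /dev/nvme1n1)
# fstab_line "$UUID" /mnt/ledger | sudo tee -a /etc/fstab

fstab_line 1234-abcd /mnt/ledger
```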

Step 2: System Tuning for RPC Workloads

System tuning for an RPC node goes beyond standard validator tuning: RPC nodes handle high concurrent connection counts and must survive request spikes without running out of file descriptors or socket buffers.

sudo bash -c "cat > /etc/sysctl.d/21-solana-rpc.conf << EOF
# UDP buffer sizes for Solana P2P
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728

# TCP buffer sizes for high-concurrency RPC
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.netdev_max_backlog = 250000
net.ipv4.tcp_max_syn_backlog = 30000

# Connection handling
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65535

# Memory mapped files
vm.max_map_count = 1000000

# File descriptor limits
fs.nr_open = 1000000
fs.file-max = 1000000
EOF"

sudo sysctl -p /etc/sysctl.d/21-solana-rpc.conf

# File descriptor limits for sol user
sudo bash -c "cat >> /etc/security/limits.conf << EOF
sol soft nofile 1000000
sol hard nofile 1000000
EOF"

# Disable swap
sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab

# NVMe I/O scheduler
echo none | sudo tee /sys/block/nvme1n1/queue/scheduler
echo none | sudo tee /sys/block/nvme2n1/queue/scheduler
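The echo-to-sysfs scheduler settings above are lost on reboot. A udev rule makes them persistent; the rule below follows standard udev syntax, and the /tmp staging path is just for illustration before installing it:

```shell
#!/bin/bash
# Persist the "none" I/O scheduler for all NVMe block devices across reboots.
RULE='ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"'
echo "$RULE" > /tmp/60-nvme-scheduler.rules
cat /tmp/60-nvme-scheduler.rules

# Install and apply (requires root):
# sudo install -m 644 /tmp/60-nvme-scheduler.rules /etc/udev/rules.d/
# sudo udevadm control --reload-rules && sudo udevadm trigger
```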

Step 3: Install Agave and Generate Identity Keypair

# Install Agave (as sol user)
sh -c "$(curl -sSfL https://release.anza.xyz/v3.1.13/install)"
echo 'export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# Verify
agave-validator --version

# Generate identity keypair
# Note: no vote account keypair needed for RPC nodes
solana-keygen new -o ~/rpc-identity-keypair.json

# Configure CLI
solana config set \
  --url https://api.mainnet-beta.solana.com \
  --keypair ~/rpc-identity-keypair.json

The identity keypair for an RPC node is primarily used for gossip protocol identity on the network. It does not need SOL, does not need to be funded, and does not need to be kept as secure as a validator identity keypair, though best practices apply.

Step 4: Startup Script: The Core of Solana RPC Node Setup

The startup script configuration is where solana rpc node setup diverges most significantly from a validator setup. The key flags are different, and the tradeoffs matter.

mkdir -p /home/sol/bin
cat > /home/sol/bin/rpc-node.sh << 'EOF'
#!/bin/bash

# Note: bash does not allow comments inside a backslash-continued command,
# so all flags are listed in one continuous block. The RPC-specific flags
# (--no-voting, --full-rpc-api, transaction history storage, --private-rpc)
# are explained below the script.
exec agave-validator \
  --identity /home/sol/rpc-identity-keypair.json \
  --ledger /mnt/ledger \
  --accounts /mnt/accounts \
  --snapshots /mnt/snapshots \
  --log /home/sol/solana-rpc.log \
  --no-voting \
  --full-rpc-api \
  --rpc-port 8899 \
  --rpc-bind-address 127.0.0.1 \
  --enable-rpc-transaction-history \
  --enable-extended-tx-metadata-storage \
  --rpc-max-multiple-accounts 100 \
  --dynamic-port-range 8000-8020 \
  --entrypoint entrypoint.mainnet-beta.solana.com:8001 \
  --entrypoint entrypoint2.mainnet-beta.solana.com:8001 \
  --entrypoint entrypoint3.mainnet-beta.solana.com:8001 \
  --entrypoint entrypoint4.mainnet-beta.solana.com:8001 \
  --entrypoint entrypoint5.mainnet-beta.solana.com:8001 \
  --known-validator 7Np41oeYqPefeNQEHSv1UDhYrehxin3NStELsSKCT4K2 \
  --known-validator GdnSyH3YtwcxFvQrVVJMm1JhTS4QVX7MFsX56uJLUfiZ \
  --known-validator DE1bawNcRJB9rVm3buyMVfr8mBEoyendZYBPge9mMxNB \
  --only-known-rpc \
  --expected-genesis-hash 5eykt4UsFv8P8NJdTREpY1vzqKqZKvdpKuc147dw2N9d \
  --limit-ledger-size 200000000 \
  --wal-recovery-mode skip_any_corrupted_record \
  --private-rpc \
  --no-port-check
EOF

chmod +x /home/sol/bin/rpc-node.sh

Critical RPC-specific flags explained:

--no-voting : disables consensus participation. This is what makes it an RPC node rather than a validator. Without this flag, the node will attempt to vote (and fail without a funded vote account).

--full-rpc-api : enables the complete set of RPC methods. Without this flag, some methods like getProgramAccounts are disabled.

--rpc-bind-address 127.0.0.1 : binds the RPC port to localhost only. Nginx will proxy requests from the public interface. Never expose 8899 directly to the internet.

--enable-rpc-transaction-history : enables storage of transaction history for RPC queries. Required for getTransaction and getSignaturesForAddress. Consumes additional ledger storage.

--enable-extended-tx-metadata-storage : stores extended transaction metadata including log messages. Required for getTransaction with full metadata. Consumes more storage than basic history.

--limit-ledger-size 200000000 : caps local ledger retention at 200 million shreds, which retains approximately 2-3 days of history for a full node. Increase for longer retention at the cost of disk space.

--private-rpc : does not publish the node’s RPC port in the Solana gossip network. Prevents your node from appearing as a public RPC endpoint to other operators.

--rpc-max-multiple-accounts 100 : limits the number of accounts that can be queried in a single getMultipleAccounts call. Prevents individual requests from monopolizing node resources.

Step 5: Systemd Service

sudo bash -c "cat > /etc/systemd/system/solana-rpc.service << EOF
[Unit]
Description=Solana RPC Node
After=network.target

[Service]
Type=simple
User=sol
Group=sol
ExecStart=/home/sol/bin/rpc-node.sh
Restart=on-failure
RestartSec=10
LimitNOFILE=1000000
Environment=RUST_LOG=solana=info

[Install]
WantedBy=multi-user.target
EOF"

sudo systemctl daemon-reload
sudo systemctl enable solana-rpc

Step 6: Nginx Reverse Proxy and Rate Limiting

Exposing port 8899 directly to the internet is the most common security mistake in Solana RPC node setup. An unprotected RPC port will be abused immediately: bot operators and researchers scan for open Solana RPC ports and send resource-intensive queries like getProgramAccounts against every endpoint they discover.

Nginx provides the reverse proxy, rate limiting, authentication, and TLS termination layer that every production deployment needs.

# /etc/nginx/sites-available/solana-rpc
upstream solana_rpc {
    server 127.0.0.1:8899;
    keepalive 64;
}

# Rate limiting zones
limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=ws_limit:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    listen 80;
    server_name rpc.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name rpc.yourdomain.com;

    ssl_certificate /etc/ssl/certs/rpc.yourdomain.com.crt;
    ssl_certificate_key /etc/ssl/private/rpc.yourdomain.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Connection limits
    limit_conn conn_limit 50;
    client_max_body_size 1m;
    client_body_timeout 30s;

    # HTTP JSON-RPC
    location / {
        limit_req zone=rpc_limit burst=200 nodelay;

        proxy_pass http://solana_rpc;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Optional: gate expensive methods such as getProgramAccounts
        # behind an API key for untrusted clients.
    }

    # WebSocket
    location /ws {
        limit_req zone=ws_limit burst=20 nodelay;

        proxy_pass http://127.0.0.1:8900;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}

# Enable and test
sudo ln -s /etc/nginx/sites-available/solana-rpc /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

Rate limiting strategy:

100 requests/second per IP with burst of 200 is appropriate for most production use cases. If you are running a private internal RPC node, increase these limits significantly. If you are running a public endpoint, consider adding API key authentication via Nginx to differentiate trusted and untrusted traffic.

The WebSocket rate limit is intentionally lower: WebSocket connections are long-lived and a single connection can generate substantial ongoing load.
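One way to differentiate trusted and untrusted traffic without an application change is Nginx's map-based rate limiting: when the zone key evaluates to an empty string, the limit simply does not apply to that request. The header name and key value below are placeholders:

```nginx
# In the http{} context. Requests carrying a valid X-Api-Key get an empty
# zone key and bypass the per-IP limit; everyone else is keyed by IP.
map $http_x_api_key $rpc_limit_key {
    default                          $binary_remote_addr;
    "REPLACE_WITH_LONG_RANDOM_KEY"   "";
}

limit_req_zone $rpc_limit_key zone=rpc_limit_keyed:10m rate=100r/s;

# Then inside the location / block:
#     limit_req zone=rpc_limit_keyed burst=200 nodelay;
```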

Step 7: Firewall Configuration

sudo ufw default deny incoming
sudo ufw default allow outgoing

# SSH
sudo ufw allow 22/tcp

# Solana P2P
sudo ufw allow 8000:8020/udp
sudo ufw allow 8000:8020/tcp

# Nginx (public HTTPS and HTTP)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Never open 8899 or 8900 directly
# All RPC traffic goes through Nginx on 443

sudo ufw enable

Step 8: Start and Verify Sync

# Start the RPC node
sudo systemctl start solana-rpc

# Monitor initial sync
tail -f /home/sol/solana-rpc.log

# Check sync progress
solana --url http://127.0.0.1:8899 catchup \
  $(solana-keygen pubkey /home/sol/rpc-identity-keypair.json)

# Verify RPC is responding
curl -s http://127.0.0.1:8899 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' | jq

# Verify through Nginx
curl -s https://rpc.yourdomain.com \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}' | jq

# Check block height vs mainnet
curl -s https://api.mainnet-beta.solana.com \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' | jq

Your node is ready when the slot number from your local endpoint matches the mainnet slot within 5-10 slots.
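For deploy automation it helps to turn that readiness rule into a gate. The synced helper below is our own sketch; the slot values come from the same curl calls shown above:

```shell
#!/bin/bash
# Gate: succeed once the node is within MAX_LAG slots of mainnet.
synced() {  # usage: synced LOCAL_SLOT MAINNET_SLOT [MAX_LAG]
  [ $(( $2 - $1 )) -le "${3:-10}" ]
}

# Live usage:
# until synced "$LOCAL_SLOT" "$MAINNET_SLOT"; do sleep 5; done

if synced 280000495 280000500; then echo "ready"; fi
```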

Step 9: Geyser Plugin for Real-Time Data Streaming

For applications that need real-time account updates, transaction streaming, or slot notifications, standard JSON-RPC polling adds latency and load. The Geyser plugin system extends the node to stream blockchain data over gRPC directly to application backends.

Yellowstone gRPC (maintained by Helius) is the most widely deployed Geyser plugin in production. It streams slot updates, block data, transaction notifications, and account updates via gRPC with sub-block latency.

# Clone Yellowstone gRPC plugin
git clone https://github.com/helius-labs/yellowstone-grpc.git
cd yellowstone-grpc

# Build
cargo build --release

# Copy the plugin shared library
cp target/release/libyellowstone_grpc_geyser.so /home/sol/

Configure the plugin:

// /home/sol/yellowstone-geyser-config.json
{
  "libpath": "/home/sol/libyellowstone_grpc_geyser.so",
  "grpc": {
    "address": "127.0.0.1:10000",
    "max_decoding_message_size": "4194304",
    "snapshot_plugin_channel_capacity": null,
    "snapshot_client_channel_capacity": "50000000",
    "channel_capacity": "100000",
    "unary_concurrency_limit": 100,
    "unary_disabled": false,
    "filters": {}
  },
  "log": {
    "level": "info"
  },
  "accounts_selector": {
    "accounts": []
  },
  "transaction_selector": {
    "mentions": []
  },
  "block_fail_action": "log"
}

Add the plugin to the startup script:

# Add to rpc-node.sh
--geyser-plugin-config /home/sol/yellowstone-geyser-config.json

With Yellowstone running, your applications can subscribe to real-time streams:

# Subscribe to slot updates
grpcurl -plaintext \
  -d '{"slots":{}}' \
  127.0.0.1:10000 \
  geyser.Geyser/Subscribe

When to use Geyser vs standard RPC:

Use standard RPC for transaction submission, balance queries, and one-off state reads. Use Geyser for real-time account monitoring, MEV infrastructure, analytics pipelines, and any application that previously relied on polling loops or WebSocket subscriptions. Geyser eliminates the overhead of maintaining WebSocket connections for high-throughput streaming use cases.

Step 10: Monitoring for Solana RPC Node Setup

A production RPC node requires monitoring across three layers: node sync state, RPC request performance, and system resources.

Key metrics to monitor:

# Check node health via RPC
curl -s http://127.0.0.1:8899 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
# Returns: {"result":"ok"} when healthy, error when behind

# Check slot lag
# Compare local slot vs mainnet slot — alert if delta > 50
LOCAL_SLOT=$(curl -s http://127.0.0.1:8899 -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' | jq '.result')

MAINNET_SLOT=$(curl -s https://api.mainnet-beta.solana.com -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' | jq '.result')

echo "Slot lag: $((MAINNET_SLOT - LOCAL_SLOT))"

Prometheus alerting rules (solana_rpc_slot_lag is a custom metric you must export yourself; the nginx_* and node_* metrics come from the standard exporters):

groups:
- name: solana_rpc
  rules:
  - alert: SolanaRPCNodeBehind
    expr: solana_rpc_slot_lag > 50
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "Solana RPC node is {{ $value }} slots behind mainnet"

  - alert: SolanaRPCHighErrorRate
    expr: rate(nginx_http_requests_total{status=~"5.."}[5m]) > 0.05
    for: 3m
    labels:
      severity: warning
    annotations:
      summary: "RPC error rate above 5% on {{ $labels.instance }}"

  - alert: SolanaRPCDiskSpaceCritical
    expr: |
      node_filesystem_avail_bytes{mountpoint="/mnt/ledger"} /
      node_filesystem_size_bytes{mountpoint="/mnt/ledger"} < 0.15
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Ledger disk at {{ $value | humanizePercentage }} capacity"

  - alert: SolanaRPCHighMemoryPressure
    expr: |
      node_memory_MemAvailable_bytes /
      node_memory_MemTotal_bytes < 0.10
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "RPC node memory critically low - getProgramAccounts queries at risk"
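The solana_rpc_slot_lag metric in the first rule is not emitted by any standard exporter; one lightweight way to produce it is node_exporter's textfile collector. The script below is a sketch, and the collector directory is an assumption you must match to your node_exporter flags:

```shell
#!/bin/bash
# Write the custom slot-lag metric in Prometheus text exposition format.
emit_slot_lag() {  # usage: emit_slot_lag LOCAL_SLOT MAINNET_SLOT
  printf 'solana_rpc_slot_lag %d\n' "$(( $2 - $1 ))"
}

# From cron, every minute (atomic rename avoids partial scrapes):
# DIR=/var/lib/node_exporter/textfile
# emit_slot_lag "$LOCAL_SLOT" "$MAINNET_SLOT" > "$DIR/solana_rpc.prom.tmp" \
#   && mv "$DIR/solana_rpc.prom.tmp" "$DIR/solana_rpc.prom"

emit_slot_lag 280000460 280000500
```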

For the full Prometheus and Grafana monitoring stack configuration including blockchain-specific metrics, see our blockchain node monitoring guide.

Common Failures in Solana RPC Node Setup

Node falls behind and never catches up: The most common problem on first launch. Usually caused by insufficient disk I/O performance or a stale snapshot. Fix: stop the node, clear the ledger directory, and restart so the node downloads a fresh snapshot (make sure --no-snapshot-fetch is not set in the startup script).

sudo systemctl stop solana-rpc
rm -rf /mnt/ledger/*
sudo systemctl start solana-rpc

getProgramAccounts queries timing out: Indicates insufficient RAM for the accounts index or too many concurrent heavy queries. Fix: increase --account-index coverage, add RAM, or rate limit heavy query methods at the Nginx layer.

RPC responding but returning stale data: Slot lag is high but the node appears healthy. Usually disk I/O saturation causing ledger processing to fall behind despite the node running. Fix: check iostat -x 1 for disk utilization. If consistently above 80%, upgrade NVMe or move to split disk configuration if not already done.

WebSocket connections dropping: Nginx proxy timeout too short or too many concurrent connections. Fix: increase proxy_read_timeout and proxy_send_timeout in Nginx config, and check limit_conn thresholds.

Node crashes after upgrade: Agave updates sometimes require clearing the snapshot and accounts cache when the data format changes. Fix: clear accounts and snapshots directories before restarting after major version upgrades.

sudo systemctl stop solana-rpc
rm -rf /mnt/accounts/accounts
rm -rf /mnt/snapshots/*
sudo systemctl start solana-rpc

Self-Hosted vs Managed RPC: When Each Makes Sense

A complete discussion of RPC infrastructure should address when self-hosting is the right choice and when managed endpoints are better.

Self-host when:

  • You need unlimited request rates without paying per-call fees.
  • You need Geyser streaming for real-time data pipelines.
  • You require guaranteed data privacy: requests never leave your infrastructure.
  • You are operating MEV infrastructure that cannot use shared endpoints.
  • You need custom RPC methods or extended transaction metadata not available from providers.

Use managed endpoints when:

  • Your request volume is moderate and per-call pricing is cheaper than server costs.
  • You need instant geographic redundancy without operating multiple nodes.
  • You need archive access without the storage investment of self-hosting.
  • Your team lacks the operational bandwidth for 24/7 infrastructure management.

The break-even point between self-hosted and managed endpoints varies by provider and request volume, but for most teams sending over 10 million requests per day, self-hosting a production RPC node is economically justified.
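The 10 million figure can be sanity-checked with simple arithmetic. The helper below uses illustrative prices only (the server cost and per-million pricing are assumptions, not quotes from any provider):

```shell
#!/bin/bash
# Requests/day at which managed per-call pricing equals a dedicated server.
break_even_rpd() {  # usage: break_even_rpd SERVER_USD_PER_MONTH USD_PER_MILLION_REQUESTS
  awk -v s="$1" -v p="$2" 'BEGIN { printf "%d\n", s / 30 / p * 1000000 }'
}

# A $600/month server vs $2 per million managed requests:
break_even_rpd 600 2   # 10 million requests/day
```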

Conclusion

A production Solana RPC node setup is not just installing Agave with --no-voting. It is a full infrastructure stack: correctly sized hardware with a split NVMe layout, system tuning for high-concurrency RPC workloads, an Nginx reverse proxy with rate limiting, a Geyser plugin for real-time streaming where needed, and monitoring that catches slot lag before applications notice it.

The operators who run stable, long-lived RPC infrastructure got there by treating the initial setup with the same rigour as validator operations. For the applications that depend on it, the RPC layer is the network.

At The Good Shell we design and operate Solana RPC infrastructure for Web3 protocols, DeFi applications, and teams that need dedicated endpoints without building and running the stack themselves. See our Web3 infrastructure services or our case studies.

For the authoritative official documentation on RPC node configuration, the Agave RPC node docs cover all supported flags and their current behavior.
