Manage
This section details the management of Canopy nodes.
Disclaimer: Canopy is alpha-stage software. While documentation and guides are provided to assist users, they are offered without warranties, guarantees, or assurances of reliability. Users are solely responsible for any issues that may arise, including but not limited to the loss of funds. Please proceed with caution.
Network Interfaces
Endpoints
Familiarize yourself with the endpoints created in the Quickstart guide:
Grafana: https://monitoring.<YOUR_DOMAIN>
Web wallet for CNPY: https://node1.<YOUR_DOMAIN>/wallet
Block Explorer for CNPY: https://node1.<YOUR_DOMAIN>/explorer
Web wallet for CNRY: https://node2.<YOUR_DOMAIN>/wallet
Block Explorer for CNRY: https://node2.<YOUR_DOMAIN>/explorer
Ports
Expose the following ports through your firewall; an example ufw command follows the list.
Canopy Node Ports
9001: TCP P2P communication for node1
9002: TCP P2P communication for node2
Load Balancer Ports
80: HTTP traffic (redirects to HTTPS in production)
443: HTTPS traffic (SSL/TLS)
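For example, on an Ubuntu host using ufw (the same firewall commands used later in the migration guide; adjust port 22 if SSH runs on a non-default port):
# Allow SSH, node P2P, and load balancer traffic, then enable the firewall
ufw allow 22/tcp
ufw allow 9001/tcp   # node1 P2P
ufw allow 9002/tcp   # node2 P2P
ufw allow 80/tcp     # HTTP (redirects to HTTPS)
ufw allow 443/tcp    # HTTPS
ufw --force enable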
Familiarize yourself with the internal ports
Monitoring Ports
3000: Grafana web interface
9090: Prometheus metrics endpoint
3100: Loki log aggregation
8082: Traefik metrics endpoint
9115: Blackbox exporter metrics
8080: cAdvisor container metrics
9100: Node exporter host metrics
Canopy service ports
50000: Wallet service for node1 (exposed via Traefik)
50001: Explorer service for node1 (exposed via Traefik)
50002: RPC service for node1 (exposed via Traefik)
50003: Admin RPC service for node1 (exposed via Traefik)
40000: Wallet service for node2 (exposed via Traefik)
40001: Explorer service for node2 (exposed via Traefik)
40002: RPC service for node2 (exposed via Traefik)
40003: Admin RPC service for node2 (exposed via Traefik)
Validator Transactions
When managing your Validator, there are a few key transaction types to be aware of:
Stake: Register a validator for operation
Edit-Stake: Edit an existing validator
Pause: Temporarily remove an existing validator from active operation
Unpause: Re-enlist a paused validator for operation
Unstake: Permanently remove a validator from active service
These transactions may be executed over CLI, RPC, or the built-in Web Wallet.
➤ STAKE TRANSACTION

Parameters:
Account: Operator address; subsequent validator transactions (such as edit-stake and unstake) should be sent from this address.
Delegate: Whether or not the validator will be actively operating (signing and producing blocks)
Committees: Chain IDs where restake is allocated
Amount: currency to stake (6 decimals)
Withdrawal: false to auto-compound rewards; true to withdraw the rewards automatically (comes with a penalty, see Governance Params)
Net-addr: P2P TCP address, e.g. tcp://<YOUR_DOMAIN>
Output: The rewards address
Signer: Account that signs the transaction
Memo: optional
Txn-fee: Transaction fee (default is pre-filled)
Password: Password of the signer account
Notes:
Signer account is where funds are deducted from. This should be the operator address
Delegate status may not be changed once set
➤ EDIT-STAKE TRANSACTION

Parameters:
Account: Operator address
Committees: Chain IDs where restake is allocated
Amount: currency to stake (6 decimals)
Withdrawal: false to auto-compound rewards; true to withdraw the rewards automatically (comes with a penalty, see Governance Params)
Net-addr: P2P TCP address, e.g. tcp://<YOUR_DOMAIN>
Output: The rewards address
Signer: Output or operator address
Memo: optional
Txn-fee: Transaction fee (default is pre-filled)
Password: Password of the signer account
Notes:
The operator address may not be edited
Use the same values for any inputs you don't want edited
The output address can only be changed by the current output address
The stake amount cannot be lowered
Fees are deducted from the signer's address
➤ UNSTAKE TRANSACTION

Parameters:
Account: Operator address
Signer: Output or operator address
Notes:
Unstaking takes 7 days for a Validator and 3 days for a delegate
Fees are deducted from the signer's address
➤ PAUSE TRANSACTION

Parameters:
Account: Operator address
Signer: Output or operator address
Notes:
A validator may be paused for 7 days before it automatically begins unstaking
Fees are deducted from the signer's address
➤ UNPAUSE TRANSACTION

Parameters:
Account: Operator address
Signer: Output or operator address
Notes:
Validators that are unpaused are expected to immediately begin consensus operation for all chains
If your validator was auto-paused, be sure to fix the issue with the validator before unpausing. See Debugging and Support
Fees are deducted from the signer's address
Slashing
⚠️ Why Did My Tokens Go Down? (Understanding Slashing)
TL;DR: You got slashed because your validator went offline or misbehaved. That means fewer tokens and missed rewards. Want to avoid this? Keep your validator online at all times.
What Causes a Slash?
Canopy slashes Validators for two main reasons:
Inactivity – If your validator misses 60 out of 100 blocks (each ~20s) within a moving window, you're considered offline and get slashed.
Double Signing – Signing two blocks at the same height for the same chain breaks consensus rules and results in a slash.
⚠️ Subchains may define additional slashing rules, but these two are the main ones on mainnet.
How Bad Is the Slash?
Slash amount for inactivity (non-sign) is 1% of staked tokens
Slash amount for double signing is 10% of staked tokens
A governance parameter called MaxSlashPerCommittee limits how much a validator may be slashed per block. Canopy caps the slashes and starts ejecting validators from the committee to prevent an escalation.
You’ll lose tokens and be kicked out of the committee temporarily — meaning no rewards, and you’ll need to regain trust.
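To put rough numbers on this (the stake amount is hypothetical, purely for illustration): a validator with 100,000 tokens staked would lose about 1,000 tokens (1%) to an inactivity slash and about 10,000 tokens (10%) to a double-sign slash, subject to the MaxSlashPerCommittee cap.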
How to Stay Safe
It’s all about uptime. Here’s how to avoid slashing:
Use auto-restart tools like systemd, supervisord, or Docker’s restart policies (see the sketch after this list).
Set up monitoring with alerts to catch issues early — we already provide a full monitoring stack (Grafana + Prometheus) in the step-by-step setup guide, so make sure it’s running and you’re checking it regularly.
Keep keys secure to avoid double-signing across different instances.
Test failover setups carefully — don’t accidentally run two validators at once.
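As one example of the Docker restart-policy option, a Compose service can be marked to restart automatically. This is only a sketch; the service name and image below are placeholders, not the actual compose file from the setup guide:
services:
  node1:                       # placeholder service name
    image: canopy-node:latest  # placeholder image tag
    restart: unless-stopped    # restart after crashes or reboots, but not after a manual stop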
➜ Grafana + Prometheus Configuration
Basic Scrape Configuration
scrape_configs:
  - job_name: 'canopy'
    static_configs:
      - targets: ['localhost:9090'] # Default Canopy metrics endpoint
    scrape_interval: 15s
    scrape_timeout: 10s
You can find a working example at deployments/monitoring-stack/monitoring/prometheus/prometheus.yml
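After editing the scrape config, you can sanity-check it before reloading. promtool ships with Prometheus and /api/v1/targets is a standard Prometheus API endpoint; the file path and port below assume the defaults used in this guide:
# Validate the configuration file syntax
promtool check config deployments/monitoring-stack/monitoring/prometheus/prometheus.yml
# Once Prometheus is running, confirm the canopy job's targets are healthy
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'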
Recommended Recording Rules
groups:
  - name: canopy
    rules:
      # Node Health
      - record: canopy:node_up
        expr: canopy_node_status == 1
      # Peer Health
      - record: canopy:peers_total
        expr: sum(canopy_peer_total{status="connected"})
      # Validator Status Summary
      - record: canopy:validators_by_status
        expr: sum by (status) (canopy_validator_status)
      # Transaction Rates
      - record: canopy:transaction_rate
        expr: rate(canopy_transaction_received[5m]) + rate(canopy_transaction_sent[5m])
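Prometheus loads recording rules from a separate rules file referenced by rule_files in prometheus.yml; the file path below is illustrative:
# In prometheus.yml: load the file that contains the rule groups above
rule_files:
  - 'rules/canopy-rules.yml'   # illustrative path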
Metric Types and Usage
Gauges
Gauges represent current values that can go up or down:
Node status
Peer counts
Block height
Memory usage
Validator status
Counters
Counters represent monotonically increasing values:
Transaction counts
Block processing counts
Histograms
Histograms track the distribution of values:
Block processing time
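If the block processing histogram also exposes the usual _bucket series (an assumption based on the _sum and _count metrics used elsewhere in this guide), a percentile query complements the average:
# 95th percentile block processing time over the last 5 minutes
histogram_quantile(0.95, sum by (le) (rate(canopy_block_processing_seconds_bucket[5m])))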
Recommended Alerts
groups:
  - name: canopy
    rules:
      # Node Health
      - alert: CanopyNodeDown
        expr: canopy_node_status == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Canopy node is down"
          description: "Node has been down for more than 5 minutes"
      # Sync Status
      - alert: CanopyNodeNotSynced
        expr: canopy_node_syncing_status == 1
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Canopy node is not synced"
          description: "Node has been out of sync for more than 15 minutes"
      # Peer Health
      - alert: CanopyLowPeerCount
        expr: sum(canopy_peer_total{status="connected"}) < 3
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Low peer count"
          description: "Node has fewer than 3 connected peers"
      # Performance
      - alert: CanopyHighBlockProcessingTime
        expr: rate(canopy_block_processing_seconds_sum[5m]) / rate(canopy_block_processing_seconds_count[5m]) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High block processing time"
          description: "Average block processing time is above 1 second"
Grafana Dashboard Recommendations
You can find an example of the default Grafana dashboard implementation at deployments/monitoring-stack/monitoring/grafana/dashboards/canopy/canopy_dashboard.json
Key Panels to Include
Node Status
Node up/down status
Sync status
Uptime
Peer Network
Total peers
Inbound/outbound peers
Peer connection status
Validator Status
Validator count by status
Validator types
Staking status
Transaction Metrics
Transaction rate
Transaction volume
Transaction types
Performance
Block processing time
Memory usage
CPU usage
Example Queries
# Node Health
canopy_node_status
# Peer Network Health
sum(canopy_peer_total{status="connected"})
# Validator Status Distribution
sum by (status) (canopy_validator_status)
# Transaction Rate
rate(canopy_transaction_received[5m]) + rate(canopy_transaction_sent[5m])
# Block Processing Performance
rate(canopy_block_processing_seconds_sum[5m]) / rate(canopy_block_processing_seconds_count[5m])
Additional Nested Chains
Canopy supports running multiple chains (like CNPY and its first subchain, CNRY) on the same machine.
To set this up, simply follow the step-by-step guide and run the setup.sh script.
This script automatically:
Configures both CNPY and CNRY
Reuses the same validator_key and keystore for both
Sets up each chain with the correct ports and configuration so they can run side by side
How The Script Works
Behind the scenes, each node (node1, node2, etc.) is configured with:
A unique chainId (1 for CNPY, 2 for CNRY)
A different listenAddress (e.g. 0.0.0.0:9001 for CNPY and 0.0.0.0:9002 for CNRY)
No manual config is needed; this is all handled automatically.
This ensures there are no port conflicts, and both chains can run simultaneously using a shared validator identity.
If you plan to add more nested chains in the future, follow the same pattern:
Assign a unique chainId to each chain
Ensure each node uses a different listenAddress
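A minimal sketch of what the per-node settings amount to, using the chainId and listenAddress keys described above (the layout is illustrative; the actual config files generated by setup.sh may differ):
# node1 (CNPY)
chainId: 1
listenAddress: 0.0.0.0:9001
# node2 (CNRY)
chainId: 2
listenAddress: 0.0.0.0:9002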
Debugging and Support
Most logs tagged ERROR should be reported, although some P2P ERROR logs are expected as peers churn (especially upon startup).
If you're encountering repeated alerts or facing issues while running your node, don't hesitate to reach out.
Community Snapshot and Public RPC Endpoint
This link is a community-maintained endpoint providing RPC access and blockchain snapshots to support fast and easy bootstrapping of Canopy. It is offered as a public resource in the spirit of community goodwill.
Validator Migration Guide
Complete server migration with minimal downtime (5-10 minutes)
Prerequisites
Access to both OLD and NEW servers
Domain DNS management access (Cloudflare recommended)
New server with Ubuntu 22.04+ and root access
Phase 0: Pause Validator
Be sure to pause your existing Validator (if applicable) before beginning the migration process to ensure no slashing risk due to downtime, peering issues, or duplicate identities.
Phase 1: Prepare New Server
1. Install Dependencies on New Server
# Update system
apt update && apt upgrade -y
# Install required packages
apt install -y ca-certificates curl gnupg lsb-release apache2-utils make git tree
# Create Docker keyring directory
mkdir -p /etc/apt/keyrings
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine + Compose
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Install Loki Docker driver for log aggregation
docker plugin install grafana/loki-docker-driver --alias loki
# Configure firewall
ufw allow 22/tcp && ufw allow 9001/tcp && ufw allow 9002/tcp && ufw allow 80/tcp && ufw allow 443/tcp && ufw --force enable
2. Update DNS Records (CRITICAL - Do this first!)
In your DNS provider (Cloudflare recommended):
YOUR_DOMAIN.com A record -> NEW_SERVER_IP
*.YOUR_DOMAIN.com A record -> NEW_SERVER_IP
3. Verify DNS Propagation
# Check DNS propagation from multiple servers
dig @8.8.8.8 YOUR_DOMAIN.com # Google DNS
dig @1.1.1.1 YOUR_DOMAIN.com # Cloudflare DNS
dig @208.67.222.222 YOUR_DOMAIN.com # OpenDNS
# All should return NEW_SERVER_IP
Phase 2: Migration (5-10 minutes downtime)
4. Stop Validator on Old Server
# Navigate to monitoring stack
cd ~/deployments/monitoring-stack
# Stop all services
sudo make down
# Verify everything is stopped
sudo make ps
5. Create Backup on Old Server
# Navigate to home directory
cd ~/
# Create complete backup (excludes large blockchain data for speed)
tar --exclude='*/canopy/*.vlog' \
--exclude='*/canopy/*.sst' \
--exclude='*/logs/*' \
--exclude='*/prometheus/data/*' \
--exclude='*/grafana/data/grafana.db' \
-czf complete-validator.tar.gz deployments/
# Check backup size (should be much smaller without logs/data)
du -sh complete-validator.tar.gz
6. Transfer to New Server
# Transfer backup to new server
scp complete-validator.tar.gz root@NEW_SERVER_IP:~/
# Alternative: Use rsync for better progress tracking
# rsync -avz --progress complete-validator.tar.gz root@NEW_SERVER_IP:~/
7. Deploy on New Server
# Extract backup
cd ~/
tar -xzf complete-validator.tar.gz
# Fix permissions (CRITICAL STEP)
chmod +x ~/deployments/docker_image/entrypoint.sh
find ~/deployments/ -name "*.sh" -exec chmod +x {} \;
find ~/deployments/ -name "cli" -exec chmod +x {} \;
# Navigate to monitoring stack
cd ~/deployments/monitoring-stack
# Start with fresh snapshots (downloads latest blockchain data)
sudo make start_with_snapshot
8. Verify Deployment
# Check all containers are running
sudo make ps
# Check logs for any errors
sudo make logs | head -30
# Test web interfaces
curl -k https://YOUR_DOMAIN.com
curl -k https://monitoring.YOUR_DOMAIN.com
# Check validator status
docker logs canopy-validator-node1-1 | tail -20
docker logs canopy-validator-node2-1 | tail -20
Phase 3: Post-Migration Verification
9. Access Web Interfaces
Once services are running, verify these endpoints:
Grafana Dashboard:
https://monitoring.YOUR_DOMAIN.com
Node 1 Wallet:
https://node1.YOUR_DOMAIN.com/wallet
Node 1 Explorer:
https://node1.YOUR_DOMAIN.com/explorer
Node 2 Wallet:
https://node2.YOUR_DOMAIN.com/wallet
Node 2 Explorer:
https://node2.YOUR_DOMAIN.com/explorer
10. Monitor Sync Status
# Watch sync progress in logs
sudo make logs -f
# Check validator status in web wallet
# Navigate to wallet interfaces and check "monitoring" tab
11. Verify Staking Status
Log into your web wallet interfaces
Check that your validator status shows as "STAKED"
Monitor for 10-15 minutes to ensure stability
Troubleshooting
Common Issues and Solutions
Permission Denied Errors:
# Fix all executable permissions
find ~/deployments/ -type f -name "*.sh" -exec chmod +x {} \;
chmod +x ~/deployments/docker_image/entrypoint.sh
DNS Not Resolving:
# Check DNS propagation
nslookup YOUR_DOMAIN.com
# Wait for DNS to propagate (1 hour max)
SSL Certificate Issues:
# Check Traefik logs
docker logs traefik-1
# SSL certificates auto-renew via Let's Encrypt
# May take 2-3 minutes after DNS propagation
Containers Not Starting:
# Check detailed logs
sudo make logs SERVICE_NAME
# Common fixes:
sudo make down
sudo make start_with_snapshot
Important Notes
Critical Points
Pause: Always pause your validator before migration
DNS First: Always update DNS records before migration
Permissions: Always run chmod commands after extracting backup
Snapshots: Use start_with_snapshot for the fastest sync
Monitoring: Watch logs for the first 10-15 minutes after migration
Expected Timeline
DNS Propagation: 1-2 minutes (Cloudflare)
Service Stop: 30 seconds
File Transfer: 2-4 minutes (depending on connection)
Service Start: 2-3 minutes
SSL Certificate: 1-2 minutes
Total Downtime: 5-10 minutes
Post-Migration
Monitor validator status for 24 hours
Backup the new server setup (an example cron entry follows this list)
Update any monitoring alerts with new IP
Test all web interfaces thoroughly
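For the backup point above, the same tar command used during migration can be scheduled periodically; a sketch assuming a root crontab, with the schedule and output filename as placeholders:
# Weekly backup of the deployment (Sunday 03:00), reusing the migration exclusions
0 3 * * 0 cd ~/ && tar --exclude='*/canopy/*.vlog' --exclude='*/canopy/*.sst' --exclude='*/logs/*' --exclude='*/prometheus/data/*' --exclude='*/grafana/data/grafana.db' -czf complete-validator-$(date +\%F).tar.gz deployments/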
Recovery Plan
If migration fails, quickly restore old server:
# On OLD server
cd ~/deployments/monitoring-stack
sudo make start_with_snapshot
# Revert DNS records to OLD_SERVER_IP
Your validator will resume from where it left off with minimal impact.
Migration Complete! 🎉
Your Canopy validator is now running on the new server with fresh blockchain data and preserved validator identity.