
Fair Use Policy explains how providers keep shared proxy infrastructure usable for everyone by managing load, bursts, and rotation patterns. For a wider view of how all the terms fit together, see the proxy provider policies overview.
Scope: What Fair Use Regulates
Fair use governs consumption patterns on shared systems: concurrency, rate, burst windows, rotation cadence, and session TTL. It is about fairness at the network layer, not the legality of your workload.
Criminal or forbidden content is covered by the ToS or AUP, not here. This page tells you how to be a predictable neighbor so the IP pool, gateways, and upstreams remain healthy for all customers.
Why It Exists
Fair use prevents a single tenant from degrading shared resources for others. It preserves pool reputation and keeps carriers, ASNs, and gateways stable.
Without guardrails, noisy workloads burn subnets, trigger upstream mitigation, and reduce everyone’s success rate. Providers publish fair use so intervention is consistent and fast.
Shared Infrastructure 101
Even with dedicated IPs, you still share gateways, ASN routes, and bandwidth schedulers. Soft and hard limits ensure no single tenant dominates.
Bottlenecks appear at multiple layers: IP pool reputation, gateway bandwidth, per-port concurrency, session managers, and carrier quotas. Design your client to respect all of them.
The Core Controls Providers Use
Fair use relies on a small set of predictable controls. Know them and plan around them.
- Concurrency ceilings per IP or per account
- Request rate shaping with sliding windows
- Burst gates that smooth spikes over short intervals
- Session TTL and sticky-session limits
- Rotation hygiene rules (minimum lifetime, cooldowns)
- Product or geo gates when a range is under remediation
What This Policy Is Not
Fair use is not a crime list. Illegal actions, DMCA-style issues, and target bans are handled under ToS/AUP and enforcement playbooks.
This page focuses on how much and how fast you may use shared infrastructure. If you need legal boundaries, read the ToS/AUP documents.
Designing For Fair Use: Quick Start
A compliant client sends steady, shaped traffic and adapts to throttle signals automatically. Treat limits as APIs, not suggestions.
Implement per-worker ceilings, exponential backoff with jitter, circuit breakers on error spikes, and sane rotation (long enough to avoid churn but short enough to avoid pinning a burned IP). Keep audit logs for fast reviews.
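A minimal sketch of the backoff piece, assuming a `send` callable you supply; the function names and default values are illustrative, not a provider API:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    # Full jitter: pick a random delay in [0, min(cap, base * 2**attempt)].
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def send_with_backoff(send, max_attempts: int = 6):
    # `send` is any callable that returns a response or raises on failure.
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt))
```

Full jitter keeps retries from many workers out of lockstep, which is exactly the stampede fair use tries to prevent.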
Limits You Will Encounter
Providers publish numeric usage limits per product and plan. Always read those before you launch.
Expect ceilings on concurrent sessions, requests per second, maximum burst in a short window, and sticky-session TTL. Expect rotation rules such as “no pinning for less than N seconds” or “minimum lifetime before rotation.”
Rotation Hygiene
Rotation rules keep the pool stable and reduce collateral blocks. Your goal is smooth, not frantic, rotation.
Prefer session stickiness with a reasonable TTL over per-request churn. Avoid back-to-back gateway reconnects. If a target blocks an IP, rotate once, not repeatedly within the same second.
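One way to encode "rotate once, not repeatedly" is a guard object that tracks minimum lifetime and cooldown. A sketch with illustrative defaults; the real values come from your plan's published rotation rules:

```python
import time

class RotationGuard:
    # min_lifetime: how long an IP must be held before rotating away from it.
    # cooldown: minimum gap between two consecutive rotations.
    def __init__(self, min_lifetime: float = 60.0, cooldown: float = 10.0):
        self.min_lifetime = min_lifetime
        self.cooldown = cooldown
        self.acquired_at = time.monotonic()
        self.last_rotation = 0.0

    def may_rotate(self) -> bool:
        now = time.monotonic()
        return (now - self.acquired_at >= self.min_lifetime
                and now - self.last_rotation >= self.cooldown)

    def rotated(self) -> None:
        self.acquired_at = self.last_rotation = time.monotonic()
```

On a target-side block, check `may_rotate()` once; if it returns False, back off instead of reconnecting in a loop.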
Burst Management
Short spikes are normal; stampedes are not. Fair use spreads load across time windows.
Use token buckets or leaky-bucket schedulers to cap short-lived floods. Spread job starts across seconds and minutes. For cron-style jobs, add random skew to avoid synchronized surges.
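A compact token bucket in Python, as one possible scheduler; `rate` and `burst` are placeholders for your plan's numbers:

```python
import time

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate        # tokens refilled per second (steady request rate)
        self.burst = burst      # bucket capacity (maximum short-lived spike)
        self.tokens = burst
        self.updated = time.monotonic()

    def acquire(self, n: float = 1.0) -> None:
        # Block until n tokens are available, then spend them.
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)
```

Call `bucket.acquire()` before every request; the bucket absorbs spikes up to `burst` and shapes everything beyond that down to `rate`.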
Concurrency Planning
Most violations come from runaway worker counts. Concurrency should follow plan limits and real target tolerance.
Bind workers to a ceiling per plan and per gateway. Increase gradually and prefer horizontal scaling across gateways or regions over vertical spikes on a single one.
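A sketch of per-gateway ceilings enforced with semaphores; the gateway names and limits here are hypothetical:

```python
import threading

# Hypothetical gateways and ceilings; use the numbers from your plan.
GATEWAY_LIMITS = {"gw-eu-1": 25, "gw-us-1": 25}
_slots = {gw: threading.BoundedSemaphore(n) for gw, n in GATEWAY_LIMITS.items()}

def run_on_gateway(gateway: str, job) -> None:
    # A worker runs only while it holds a slot for its gateway,
    # so concurrency per gateway can never exceed the ceiling.
    with _slots[gateway]:
        job()
```

Scaling out then means adding a gateway entry with its own ceiling, not raising one gateway's number until it trips the provider's caps.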
Session TTL and Sticky Sessions
Sticky sessions reduce handshake overhead and stabilize identity at the target, but they have upper bounds.
Choose TTLs that match target behavior and published limits. For HTTP CONNECT and SOCKS5, keep-alive is helpful until it hits provider TTLs. Use QUIC/UDP only if your product explicitly supports it and states that it is allowed.
Signals Providers Watch (Usage, not Content)
Providers observe patterns that indicate stress on shared layers. These are usage signals, not moral judgments.
Watch for sustained 4xx/5xx clusters, sharp concurrency jumps, rapid reconnects, rotation thrash, and retry storms. Expect shaping when these appear. Your client should back off, extend TTLs slightly, or stagger workers.
Shaping and Throttle: How It Feels
When shaping begins, you will see added latency, queued requests, 429s, or temporary caps on sessions. This is corrective, not punitive.
Treat throttle as feedback. Reduce rate, spread retries, lengthen sticky TTL by a modest step, and verify that error budgets fall back to normal.
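"Lengthen sticky TTL by a modest step" can be mechanical. A sketch with illustrative bounds; the real ceiling is your plan's published TTL limit:

```python
class AdaptiveTTL:
    def __init__(self, start: float = 120.0, step: float = 30.0, ceiling: float = 600.0):
        self.ttl = start
        self.step = step
        self.ceiling = ceiling    # never exceed the provider's published TTL limit

    def on_throttle(self) -> None:
        # Grow by one modest step per throttle signal (429, queueing, added latency).
        self.ttl = min(self.ceiling, self.ttl + self.step)

    def on_healthy_window(self) -> None:
        # Decay slowly once error budgets return to normal.
        self.ttl = max(self.step, self.ttl - self.step / 2)
```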
Recovery After An Incident
Most providers restore full access once your metrics stabilize. Arrive with evidence, not promises.
Share a short timeline, the configuration diff (before vs after), sampled logs with timestamps and endpoints, and a graph of rate/concurrency before and after your fix. Keep changes reversible and documented.
Sizing Your Plan For Fair Use
Right-size your plan so normal days sit below limits and peaks stay within burst windows. Mixing models can help.
If your baseline fits a subscription or per-IP pack but marketing spikes exceed burst gates, add a small per-GB or on-demand pool for overflow. Keep overflow isolated in a separate gateway to protect steady jobs.
Environment and Target Etiquette
Fair use expects you to be a predictable neighbor to targets as well as to the provider.
Respect crawl delays, Retry-After headers, and target-side block signals. Maintain target allowlists and use exponential backoff on 429s. If a target explicitly disallows access, remove it from your jobs instead of fighting blocks.
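Retry-After can arrive as delta-seconds or as an HTTP date; a small parser using only the standard library covers both:

```python
import email.utils
import time

def retry_after_seconds(header_value: str) -> float:
    # RFC 9110: Retry-After is either delta-seconds ("120")
    # or an HTTP-date ("Wed, 21 Oct 2025 07:28:00 GMT").
    try:
        return max(0.0, float(header_value))
    except ValueError:
        dt = email.utils.parsedate_to_datetime(header_value)
        return max(0.0, dt.timestamp() - time.time())
```

Sleep at least that long before retrying; shorter waits only extend the throttle.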
Monitoring Checklist
A minimal set of charts and alerts will prevent most incidents; a sketch of the alert logic follows the list.
- Active sessions vs allowed ceiling
- Requests per second with p50 and p95 latency
- Error rates split by 4xx and 5xx
- Rotation cadence and session TTL distribution
- Reconnects per minute and per gateway
- Throttle events and queue depth
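A sketch of how those alerts might be wired, with hypothetical thresholds; tune them to your own ceilings and baseline:

```python
def check_alerts(m: dict) -> list[str]:
    # `m` holds current values for the metrics listed above.
    alerts = []
    if m["active_sessions"] > 0.8 * m["session_ceiling"]:
        alerts.append("sessions above 80% of ceiling")
    if m["error_rate_4xx"] > 0.05 or m["error_rate_5xx"] > 0.02:
        alerts.append("error rate above budget")
    if m["reconnects_per_min"] > 30:
        alerts.append("reconnect storm forming")
    if m["throttle_events"] > 0:
        alerts.append("provider shaping active")
    return alerts
```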
Implementation Cheatsheet
The rule in two clauses: enforce ceilings first, then adapt to feedback. Below is a compact reference you can drop into CI, followed by a sketch that encodes it.
- Concurrency: cap workers per plan and per gateway
- Rate: global token bucket with per-worker sub-buckets
- Backoff: exponential with jitter, max retry window bounded
- Rotation: minimum lifetime M seconds, cooldown N seconds
- Sticky TTL: start modest, grow by small steps on soft 429s
- Circuit breaker: trip on error-rate spike, auto half-open with probes
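One way to make the cheatsheet enforceable is a config dict validated in CI; every value below is a placeholder for your plan's actual limits:

```python
# Placeholder values; substitute the limits published for your plan.
FAIR_USE = {
    "max_workers_per_gateway": 25,
    "global_rps": 50.0,
    "burst_tokens": 100.0,
    "backoff_base_s": 0.5,
    "backoff_cap_s": 30.0,
    "rotation_min_lifetime_s": 60,
    "rotation_cooldown_s": 10,
    "sticky_ttl_start_s": 120,
    "sticky_ttl_step_s": 30,
}

def validate(cfg: dict) -> None:
    # Fail the build if a setting drifts outside sane fair-use relationships.
    assert cfg["max_workers_per_gateway"] > 0
    assert cfg["burst_tokens"] >= cfg["global_rps"], "burst should cover 1s of steady rate"
    assert cfg["rotation_min_lifetime_s"] >= cfg["rotation_cooldown_s"]
    assert cfg["backoff_cap_s"] > cfg["backoff_base_s"]

validate(FAIR_USE)
```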
Signal to Action Table
If you see these, adjust load and rotation first. Ask support only after metrics stabilize.
| Signal (usage) | Likely provider response | Your immediate move |
| --- | --- | --- |
| 429 or explicit throttle flags | Rate shaping on gateway | Drop rate, add jitter, lengthen TTL slightly |
| Concurrency spike | Session caps or queueing | Reduce workers, stagger starts, spread across gateways |
| Rotation thrash | Temporary gate on rotation | Increase TTL, slow reconnects |
| Reconnect storms | Transport-level dampening | Add backoff, pool keep-alives where supported |
| Sustained 5xx | Scoped block on stressed range | Lower rate, switch range or region, open a ticket |
FAQ
Fair use is about predictable usage patterns on shared infrastructure. Build for steady flow and graceful degradation.
Does fair use include numeric limits?
Yes, but numbers are per product and plan. Read the limits page for your plan and load them into your client at startup so behavior adapts automatically.
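A sketch of loading limits at startup; the endpoint and field names are hypothetical, since each provider exposes limits differently:

```python
import json
import urllib.request

# Hypothetical URL; check your provider's docs for a real limits endpoint.
LIMITS_URL = "https://api.example-provider.com/v1/plan/limits"

def load_limits(url: str = LIMITS_URL) -> dict:
    # Fetch ceilings at startup so they are data, not hard-coded guesses.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```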
Can I pin a single IP for weeks?
Only if your product allows long sticky TTLs. Many plans expect periodic rotation to protect pool health. Follow the stated TTL and rotation rules.
Are UDP or QUIC allowed?
Only if your product states support for them. If unsupported, large UDP bursts will be shaped or blocked. Use TCP-based CONNECT where required.
How do I avoid synchronized spikes?
Randomize start times, distribute jobs across minutes, and apply token buckets with small refill intervals to smooth traffic.
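For cron-style jobs the skew can be a one-liner; the five-minute window is an arbitrary example:

```python
import random
import time

def skewed_start(max_skew_s: float = 300.0) -> None:
    # Sleep a random interval so many identical schedules don't fire at once.
    time.sleep(random.uniform(0, max_skew_s))
```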
What if I exceeded limits by mistake?
Reduce pressure, ship logs and a short timeline, and confirm your fixes. Providers usually restore full capacity after metrics stabilize.