Global proxy — every request through AvocadoVPN

In settings.py:
import os

AVP_KEY = os.environ['AVP_KEY']
AVP_SECRET = os.environ['AVP_SECRET']

# HttpProxyMiddleware is enabled by default at priority 750; listing it
# explicitly is optional but documents the dependency
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
}

# Route every request through the proxy. HttpProxyMiddleware reads the
# standard proxy environment variables, not Scrapy settings, so set
# os.environ rather than bare module-level names:
os.environ['https_proxy'] = f"http://{AVP_KEY}:{AVP_SECRET}@api.atlasvpn.live:7777"
os.environ['http_proxy'] = os.environ['https_proxy']
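If you prefer to keep the proxy out of settings.py entirely, the same global proxy can be configured in the shell (a sketch, assuming AVP_KEY and AVP_SECRET are already exported):

```shell
# Build the proxy URL from the exported credentials; HttpProxyMiddleware
# reads the lowercase http_proxy / https_proxy variables automatically
export http_proxy="http://${AVP_KEY}:${AVP_SECRET}@api.atlasvpn.live:7777"
export https_proxy="$http_proxy"
```

Run scrapy crawl from the same shell afterwards and every request goes through the proxy.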
Scrapy's HttpProxyMiddleware honours the HTTP_PROXY / HTTPS_PROXY environment variables out of the box, so no extra middleware configuration is needed as long as the variables are set before scrapy crawl runs.

Per-request proxy via meta['proxy']

Setting meta['proxy'] on each Request gives you full control over which request gets which sticky session:
import scrapy

class PriceSpider(scrapy.Spider):
    name = 'price'
    # self.skus, self.run_id, self.avp_key and self.avp_secret are assumed
    # to be set elsewhere, e.g. via -a spider arguments

    def start_requests(self):
        for sku in self.skus:
            # one sticky session per SKU
            session_tag = f"run-{self.run_id}-sku-{sku}"
            auth = f"{self.avp_key}-country-us-session-{session_tag}:{self.avp_secret}"
            yield scrapy.Request(
                url=f"https://target.com/sku/{sku}",
                meta={'proxy': f"http://{auth}@api.atlasvpn.live:7777"},
                callback=self.parse_price,
            )
Each SKU gets its own sticky session — useful when the target site rate-limits per-IP-per-URL.
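The auth-string format used above can be factored into a small helper. This is a sketch: the key-country-session username layout is taken from the examples in this doc, and the helper name is mine, not part of any AvocadoVPN SDK.

```python
def avp_proxy(key: str, secret: str, country: str = "us", session: str = "") -> str:
    """Build a proxy URL with optional geo and sticky-session flags.

    Hypothetical helper; the username layout mirrors the examples above.
    """
    user = key
    if country:
        user += f"-country-{country}"
    if session:
        user += f"-session-{session}"
    return f"http://{user}:{secret}@api.atlasvpn.live:7777"
```

Then `meta={'proxy': avp_proxy(self.avp_key, self.avp_secret, session=session_tag)}` keeps the request-building code free of string plumbing.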

Handling 429 / 502 retries

Scrapy's built-in RetryMiddleware is enabled by default. Make sure your settings.py retries AvocadoVPN's transient codes:
RETRY_ENABLED = True
RETRY_TIMES = 3
RETRY_HTTP_CODES = [429, 500, 502, 503, 504]
DOWNLOAD_DELAY = 0.5  # floor for AutoThrottle; be nice to the target site
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_TARGET_CONCURRENCY = 2  # keep at or below your tier's rate limit
Don't retry 407 / 402 / 403 from AvocadoVPN: those are deterministic errors (bad credentials, billing, or access denied), so retrying cannot succeed and only burns requests against your rate limit.
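The policy above boils down to a predicate like this. It is a standalone sketch of the decision logic, not Scrapy's actual RetryMiddleware implementation:

```python
RETRYABLE = {429, 500, 502, 503, 504}   # transient: worth another attempt
FATAL_PROXY = {402, 403, 407}           # deterministic: retrying never helps

def should_retry(status: int, attempt: int, max_retries: int = 3) -> bool:
    """Return True when a response with this status deserves another attempt."""
    if status in FATAL_PROXY:
        return False
    return status in RETRYABLE and attempt < max_retries
```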

Random sticky session per request

import scrapy
import uuid

class RotatingSpider(scrapy.Spider):
    ...
    def start_requests(self):
        for url in self.urls:
            tag = uuid.uuid4().hex[:12]  # fresh tag → fresh exit IP
            auth = f"{self.avp_key}-country-fr-session-{tag}:{self.avp_secret}"
            yield scrapy.Request(
                url=url,
                meta={'proxy': f"http://{auth}@api.atlasvpn.live:7777"},
            )
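A quick sanity check on the tag scheme: 12 hex characters carry 48 bits of randomness, so two requests in the same crawl accidentally sharing a session tag (and thus an exit IP) is vanishingly unlikely.

```python
import uuid

# 10,000 draws from a 48-bit space collide with probability on the
# order of 1e-7, so a crawl of this size gets all-distinct tags
tags = {uuid.uuid4().hex[:12] for _ in range(10_000)}
assert len(tags) == 10_000
```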

Debugging: log the proxy per request

class LoggingMiddleware:
    def process_request(self, request, spider):
        spider.logger.debug(f"via proxy: {request.meta.get('proxy', 'direct')}")
        return None  # returning None lets the request continue down the chain

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.LoggingMiddleware': 100,  # runs before HttpProxyMiddleware (750)
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
}
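One gotcha: Scrapy logs at INFO by default, so the middleware's logger.debug lines are invisible unless you lower the log level:

```python
# settings.py — Scrapy defaults to INFO; debug lines need DEBUG
LOG_LEVEL = 'DEBUG'   # or pass -L DEBUG to scrapy crawl
```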