Scraping Browser

Get started



Introduction

Scraping Browser is a tool that helps you extract data from web pages more easily through automation tools and techniques.

With the credentials we provide, you can use the Scraping Browser locally from your own scripts or on our Scraping Browser usage page. We open a fingerprint browser for you, backed by a proxy network, and it handles your automation scripts and data collection through browser automation and unlockers.

Application Scenarios:

Widely used in data analysis, market monitoring, and other fields. Compatible with mainstream automation frameworks and languages (such as Puppeteer, Selenium, and Playwright), it suits scenarios such as e-commerce price monitoring, public opinion analysis, and academic research.

Quick Start

Get Account Password

Visit the ABC Proxy official website, register an account, go to the Personal Centre, and get your account password under Scraping Browser.
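
The request samples below combine this account and password into a single credential string embedded in the connection endpoint. A minimal sketch of how the credential string is assembled (the account and password values are placeholders; the endpoints are the ones used in the samples below):

# Placeholder credentials from the Personal Centre -> Scraping Browser
ACCOUNT = 'PROXY-FULL-ACCOUNT'
PASSWORD = 'PASSWORD'
AUTH = f'{ACCOUNT}:{PASSWORD}'

# WebSocket/CDP endpoint used by the Playwright and Puppeteer samples below
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'
# Remote WebDriver endpoint used by the Selenium sample below
REMOTE_WEBDRIVER = f'https://{AUTH}@hs-scbr.abcproxy.com'

print(SBR_WS_SERVER)
print(REMOTE_WEBDRIVER)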

How to use

Request samples

Below you'll find sample requests for Playwright (Python), Selenium (Python), Puppeteer (Node.js), and Playwright (Node.js):

Playwright (Python):

import asyncio
from playwright.async_api import async_playwright

AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'


async def run(pw):
    print('Connecting to Scraping Browser...')
    browser = await pw.chromium.connect_over_cdp(SBR_WS_SERVER)
    try:
        print('Connected! Navigating to target...')
        page = await browser.new_page()
        await page.goto('https://example.com', timeout=2 * 60 * 1000)

        # Screenshot
        print('Taking a screenshot of the page')
        await page.screenshot(path='./remote_screenshot_page.png')

        # HTML content
        print('Scraping page content...')
        html = await page.content()
        print(html)
    finally:
        # Always close the browser once the script has finished
        await browser.close()


async def main():
    async with async_playwright() as playwright:
        await run(playwright)


if __name__ == '__main__':
    asyncio.run(main())
 
Selenium (Python):

from selenium.webdriver import Remote, ChromeOptions
from selenium.webdriver.chromium.remote_connection import ChromiumRemoteConnection

# Enter your credentials - the account and password
AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
REMOTE_WEBDRIVER = f'https://{AUTH}@hs-scbr.abcproxy.com'


def main():
    print('Connecting to Scraping Browser...')
    sbr_connection = ChromiumRemoteConnection(REMOTE_WEBDRIVER, 'goog', 'chrome')
    with Remote(sbr_connection, options=ChromeOptions()) as driver:
        # Navigate to the target URL
        print('Connected! Navigating to target...')
        driver.get('https://example.com')

        # Screenshot
        print('Saving screenshot to remote_page.png')
        driver.get_screenshot_as_file('./remote_page.png')

        # HTML content
        print('Getting page content...')
        html = driver.page_source
        print(html)


if __name__ == '__main__':
    main()

Puppeteer (Node.js):

const puppeteer = require('puppeteer-core');

const AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD';
const SBR_WS_ENDPOINT = `wss://${AUTH}@upg-scbr.abcproxy.com`;

(async () => {
    console.log('Connecting to Scraping Browser...');
    const browser = await puppeteer.connect({
        browserWSEndpoint: SBR_WS_ENDPOINT,
        defaultViewport: { width: 1920, height: 1080 }
    });
    try {
        console.log('Connected! Navigating to target URL...');
        const page = await browser.newPage();

        await page.goto('https://example.com', { timeout: 2 * 60 * 1000 });

        // 1. Screenshot
        console.log('Saving screenshot to remote_screenshot.png');
        await page.screenshot({ path: 'remote_screenshot.png' });
        console.log('Screenshot saved');

        // 2. Get content
        console.log('Getting page content...');
        const html = await page.content();
        console.log('Source HTML: ', html);
    } finally {
        // Always close the browser once the script has finished
        await browser.close();
    }
})();

Playwright (Node.js):

const pw = require('playwright');

const AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD';
const SBR_CDP = `wss://${AUTH}@upg-scbr.abcproxy.com`;

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await pw.chromium.connectOverCDP(SBR_CDP);
    try {
        console.log('Connected! Navigating to target...');
        const page = await browser.newPage();
        // Target URL
        await page.goto('https://www.windows.com', { timeout: 2 * 60 * 1000 });

        // Screenshot
        console.log('Taking a screenshot of the page');
        await page.screenshot({ path: './remote_screenshot_page.png' });

        // HTML content
        console.log('Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        // Always close the browser once the script has finished
        await browser.close();
    }
}

if (require.main === module) {
    main().catch(err => {
        console.error(err.stack || err);
        process.exit(1);
    });
}
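
Beyond taking a screenshot and dumping the full page HTML, the same connection can be used to extract specific elements, for example for the e-commerce price monitoring scenario mentioned above. A minimal Playwright (Python) sketch; the product URL and the .price selector are placeholders for illustration only:

import asyncio
from playwright.async_api import async_playwright

AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'


async def check_price():
    async with async_playwright() as pw:
        # Connect to the remote fingerprint browser over CDP
        browser = await pw.chromium.connect_over_cdp(SBR_WS_SERVER)
        try:
            page = await browser.new_page()
            # Placeholder product URL and CSS selector -- replace with your own target
            await page.goto('https://example.com/product/123', timeout=2 * 60 * 1000)
            price = await page.text_content('.price')
            print('Current price:', price)
        finally:
            # Always close the browser once the script has finished
            await browser.close()


if __name__ == '__main__':
    asyncio.run(check_price())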

On the Scraping Browser usage page:

Step 1: Select a use case to obtain the relevant script.

Step 2: Specify a region (optional) and select an account.

Step 3: Run the script in the REQUEST area and wait for processing to finish. The result is returned in the RESPONSE and CONSOLE areas, where you can preview the result and view the result log. On the right side of the RESPONSE area, you can switch to the HTML view and download the HTML file.
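
If you run the samples locally instead of on the usage page, you can reproduce the HTML download step yourself by writing the scraped content to a file. A minimal sketch (the save_html helper and file name are illustrative, not part of any ABCProxy API):

# Illustrative helper: write the scraped page source (the `html` value
# from the samples above) to a local file, mirroring the usage page's
# "download the HTML file" option.
def save_html(html: str, path: str = './remote_page.html') -> None:
    with open(path, 'w', encoding='utf-8') as f:
        f.write(html)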

All information herein is provided on an “as is” basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on this page. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website’s terms of service or receive a scraping license.
