Configure Scraping browser

How to configure the Scraping Browser

Before using the Scraping Browser, a few configuration steps are required. This article walks you through obtaining your credentials, configuring the Scraping Browser, running the sample scripts, and monitoring live browser sessions in the Page Operations Console. Follow the instructions below to get the most out of the Scraping Browser for web scraping.

To start using the Scraping Browser, first obtain your credentials: the username and password you will pass to your web automation tool. These examples assume you have already installed your preferred web automation framework (Playwright, Selenium, or Puppeteer); if not, install it before continuing.

Sample Code

We provide several scraping examples to help you get started with the Scraping Browser quickly. Simply replace the credentials and target URL, then customize the scripts for your business scenario.

To run the scripts in your local environment, refer to the following examples. Make sure the required dependencies are installed locally, configure your credentials, and then execute the script to retrieve the data you need.

If the webpage you are accessing triggers CAPTCHAs or other verification challenges, don't worry: we handle them for you automatically.

Python (Playwright)

import asyncio
from playwright.async_api import async_playwright

# Enter your credentials: the username and password for the Scraping Browser
AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'

async def run(pw):
    print('Connecting to Scraping Browser...')
    browser = await pw.chromium.connect_over_cdp(SBR_WS_SERVER)
    try:
        print('Connected! Navigating to target...')
        page = await browser.new_page()
        await page.goto('https://example.com', timeout=2 * 60 * 1000)

        # Screenshot
        print('Taking screenshot of the page')
        await page.screenshot(path='./remote_screenshot_page.png')

        # HTML content
        print('Scraping page content...')
        html = await page.content()
        print(html)
    finally:
        # Always close the browser so the session is released
        await browser.close()

async def main():
    async with async_playwright() as playwright:
        await run(playwright)

if __name__ == '__main__':
    asyncio.run(main())
 
Python (Selenium)

from selenium.webdriver import Remote, ChromeOptions
from selenium.webdriver.chromium.remote_connection import ChromiumRemoteConnection

# Enter your credentials: the username and password for the Scraping Browser
AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
REMOTE_WEBDRIVER = f'https://{AUTH}@hs-scbr.abcproxy.com'

def main():
    print('Connecting to Scraping Browser...')
    sbr_connection = ChromiumRemoteConnection(REMOTE_WEBDRIVER, 'goog', 'chrome')
    with Remote(sbr_connection, options=ChromeOptions()) as driver:
        # Navigate to the target URL
        print('Connected! Navigating to target...')
        driver.get('https://example.com')

        # Screenshot
        print('Saving screenshot to remote_page.png')
        driver.get_screenshot_as_file('./remote_page.png')

        # HTML content
        print('Getting page content...')
        html = driver.page_source
        print(html)

if __name__ == '__main__':
    main()

Node.js (Puppeteer)

const puppeteer = require('puppeteer-core');

// Enter your credentials: the username and password for the Scraping Browser
const AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD';
const SBR_WS_ENDPOINT = `wss://${AUTH}@upg-scbr.abcproxy.com`;

(async () => {
    console.log('Connecting to Scraping Browser...');
    const browser = await puppeteer.connect({
        browserWSEndpoint: SBR_WS_ENDPOINT,
        defaultViewport: { width: 1920, height: 1080 }
    });
    try {
        console.log('Connected! Navigating to target URL');
        const page = await browser.newPage();

        await page.goto('https://example.com', { timeout: 2 * 60 * 1000 });

        // 1. Screenshot
        console.log('Saving screenshot to remote_screenshot.png');
        await page.screenshot({ path: 'remote_screenshot.png' });
        console.log('Screenshot saved');

        // 2. Get content
        console.log('Getting page content...');
        const html = await page.content();
        console.log('Source HTML: ', html);
    } finally {
        // Always close the browser when the script finishes so the session is released
        await browser.close();
    }
})();

Node.js (Playwright)

const pw = require('playwright');

// Enter your credentials: the username and password for the Scraping Browser
const AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD';
const SBR_CDP = `wss://${AUTH}@upg-scbr.abcproxy.com`;

async function main() {
    console.log('Connecting to Scraping Browser...');
    const browser = await pw.chromium.connectOverCDP(SBR_CDP);
    try {
        console.log('Connected! Navigating to target...');
        const page = await browser.newPage();
        // Target URL
        await page.goto('https://www.windows.com', { timeout: 2 * 60 * 1000 });

        // Screenshot
        console.log('Taking screenshot of the page');
        await page.screenshot({ path: './remote_screenshot_page.png' });

        // HTML content
        console.log('Scraping page content...');
        const html = await page.content();
        console.log(html);
    } finally {
        // Always close the browser when the script finishes so the session is released
        await browser.close();
    }
}

if (require.main === module) {
    main().catch(err => {
        console.error(err.stack || err);
        process.exit(1);
    });
}

Scraping Browser Initial Navigation and Workflow Management

The Scraping Browser session architecture allows each session to perform only one initial navigation: the first load of the target website from which subsequent data extraction proceeds. After this initial load, you can freely navigate the site within the same session through clicks, scrolls, and other interactive actions. However, to start a new scraping job from the initial navigation phase, whether targeting the same site or a different one, you must create a new session, as the sketch below illustrates.
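
The following is a minimal Python (Playwright) sketch of the one-session-per-job pattern. The endpoint and connection call come from the examples above; the `scrape_job` helper and the second target URL are illustrative assumptions, not part of the ABCProxy API.

import asyncio
from playwright.async_api import async_playwright

AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'

async def scrape_job(pw, url):
    # Each job gets its own session: connect, perform the single
    # initial navigation, interact as needed, then close.
    browser = await pw.chromium.connect_over_cdp(SBR_WS_SERVER)
    try:
        page = await browser.new_page()
        await page.goto(url, timeout=2 * 60 * 1000)  # the one initial navigation
        # In-session interactions (clicks, scrolls, following links) are fine here,
        # e.g. await page.click('a.next')
        return await page.content()
    finally:
        await browser.close()  # release this session before starting the next job

async def main():
    async with async_playwright() as pw:
        # A new target (or a fresh run against the same target) means a new session.
        for url in ['https://example.com', 'https://example.org']:
            html = await scrape_job(pw, url)
            print(f'{url}: {len(html)} bytes of HTML')

asyncio.run(main())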

Session Time Limits

1. Regardless of how you operate the browser, session timeout limits apply. If a browser session is not explicitly closed in your script, the system automatically terminates it after a maximum of 60 minutes.

2. When using the Scraping Browser via the web console, the system enforces a strict rule of one active session per account. To ensure optimal performance and experience, always explicitly close the browser session in your script.
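
One way to guarantee the session is released even when a script raises an error is to wrap the connection in an async context manager. This is a minimal sketch using the same Playwright endpoint as above; the `scraping_browser` helper is hypothetical, not part of the ABCProxy API.

import asyncio
from contextlib import asynccontextmanager
from playwright.async_api import async_playwright

AUTH = 'PROXY-FULL-ACCOUNT:PASSWORD'
SBR_WS_SERVER = f'wss://{AUTH}@upg-scbr.abcproxy.com'

@asynccontextmanager
async def scraping_browser():
    # Hypothetical helper: connects to the Scraping Browser and guarantees
    # the session is closed on exit, well before the 60-minute hard limit.
    async with async_playwright() as pw:
        browser = await pw.chromium.connect_over_cdp(SBR_WS_SERVER)
        try:
            yield browser
        finally:
            await browser.close()

async def main():
    async with scraping_browser() as browser:
        page = await browser.new_page()
        await page.goto('https://example.com', timeout=2 * 60 * 1000)
        print(await page.title())

asyncio.run(main())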
