By: Ranjith R

How to Use Python to Scrape Data From Website & Save It to Excel


Web scraping is a technique used to extract information from websites and transform it into a structured format for analysis or other uses. With Python, you can efficiently scrape and save data into Excel files using libraries like BeautifulSoup, Selenium, Pyppeteer, and requests. In this blog, we'll explore how to use these libraries to scrape data and save it into Excel, and dive into the advanced features of Selenium and Pyppeteer for more complex scraping tasks.

Ensure you have Python installed along with the following libraries:

* Requests: This library helps you easily send and receive data from websites. You can use it to fetch web pages or send information to a server.

* BeautifulSoup (from bs4): This tool takes the HTML from web pages and makes it easier to navigate and extract useful information, like specific text or links.

* OpenPyXL: This library lets you work with Excel files. You can read data from spreadsheets, write new data, and format cells, making it useful for managing and analyzing data.

* Selenium: This library automates browser interactions, allowing you to control a web browser to navigate websites, interact with elements, and extract dynamic content.

* Pyppeteer: This library automates headless Chrome/Chromium, providing a way to control a web browser in a headless mode for scraping and automating complex interactions.
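
You can install all of these with pip, for example:

pip install requests beautifulsoup4 openpyxl selenium pyppeteer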

1. Retrieving Website Data Using Requests and BeautifulSoup

* Step 1: Setting Up the Environment

Start by importing the necessary libraries:

from bs4 import BeautifulSoup
import requests
from openpyxl import Workbook

* Step 2: Sending a Request and Getting the HTML Content

url = "https://www.example.com"
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
response.raise_for_status()
html_content = response.text

1. Setting the URL

url = "https://www.example.com"

* What it does: Defines the web address of the page you want to scrape. This is the target from which you want to extract data.

* Example: If you're looking to scrape a blog post, you would set the URL to the blog's URL.

2. Setting Headers

headers = {'User-Agent': 'Mozilla/5.0'}

* What it does: Defines the headers to be sent along with the request, particularly the User-Agent string.

* User-Agent: This string helps mimic the request as if it's coming from a browser, rather than a script, which can help avoid being blocked by some websites.

* Why it's important: Some websites block requests that don't look like they're coming from real browsers, so setting a User-Agent can help your request get through.

3. Sending the GET Request

response = requests.get(url, headers=headers)

* What it does: Sends an HTTP GET request to the specified URL using the requests.get() function, with the headers included.

* Why use GET: The GET method is used to request data from the specified resource, which in this case is the HTML content of the web page.

4. Handling Potential Errors

response.raise_for_status()

* What it does: Checks if the request was successful. If the server returns a status code indicating an error (like 404 or 500), this line will raise an HTTPError exception.

* Why it's important: Ensures that your code stops running and notifies you if there's an issue with the request, preventing further execution with invalid data.

5. Retrieving the HTML Content

html_content = response.text

* What it does: Extracts the HTML content from the response object using the .text attribute.

* Why use .text: This attribute returns the content of the response in Unicode, which is useful for working with HTML data that needs to be parsed or processed further.

Step 3: Parsing the HTML with BeautifulSoup

Parse the HTML content using BeautifulSoup:

soup = BeautifulSoup(html_content, 'html.parser')

Explanation:

* BeautifulSoup(html_content, 'html.parser'): Parses the HTML content using the built-in HTML parser. This allows BeautifulSoup to navigate and search through the HTML document.

Step 4: Navigating and Extracting Data

BeautifulSoup provides powerful methods for extracting and manipulating data:

1. Extracting Text

Example:

text = soup.find('p').string

Explanation:

* .find('p'): Finds the first <p> tag in the HTML content.

* .string: Retrieves the text within the <p> tag. This method is useful for extracting simple text content directly.

Note: If the tag contains more than one child (for example, nested tags mixed with text), .string returns None. Use .get_text() to extract all of the text inside the tag.
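
For example, a minimal sketch of pulling all of the nested text out of a tag (the <div> here is illustrative):

full_text = soup.find('div').get_text(strip=True)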

2. Modifying HTML Attributes

Example:

tag = soup.find('a')
tag['href'] = 'https://www.new-url.com'

Explanation:

* .find('a'): Finds the first <a> tag in the HTML content.

* tag['href']: Accesses the href attribute of the <a> tag.

* = 'https://www.new-url.com': Sets a new value for the href attribute, allowing you to dynamically modify the HTML.

3. Extracting Image Sources

Example:

img_src = soup.find('img').get('src')

Explanation:

* .find('img'): Finds the first <img> tag in the HTML content.

* .get('src'): Retrieves the value of the src attribute, which is the URL of the image.
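
Note that src is often a relative path. Here is a minimal sketch of resolving it into an absolute URL with the standard library's urljoin, reusing the url variable defined earlier:

from urllib.parse import urljoin

img_src = soup.find('img').get('src')
full_img_url = urljoin(url, img_src)  # e.g. turns '/images/logo.png' into an absolute URL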

4. Navigating to Sibling Tags

Example:

next_sibling = soup.find('h2').find_next_sibling()
previous_sibling = soup.find('h2').find_previous_sibling()

Explanation:

* .find('h2'): Finds the first <h2> tag in the HTML content.

* .find_next_sibling(): Retrieves the next sibling tag of the <h2> tag.

* .find_previous_sibling(): Retrieves the previous sibling tag of the <h2> tag. These methods are useful for navigating through elements at the same level in the HTML structure.

5. Extracting and Modifying Attributes

Example:

attributes = soup.find('a').attrs

Explanation:

* .attrs: Gets a dictionary of all attributes for a tag. This allows you to inspect or modify all attributes of an HTML element.
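
For example, assuming the first <a> tag on the page is <a href="/home" class="nav-link">Home</a> (a hypothetical tag), the result would look like this:

attributes = soup.find('a').attrs
# {'href': '/home', 'class': ['nav-link']}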

2. Retrieving Website Data Using Selenium

Selenium is a powerful tool that allows you to automate interactions with web browsers. This means you can control a browser to perform tasks like clicking buttons, filling out forms, and navigating through pages, just as if you were doing it manually. 

Step 1: Setting Up the Environment

Start by installing and importing the necessary libraries:

from selenium import webdriver
from selenium.webdriver.common.by import By

Step 2: Configuring the WebDriver

driver = webdriver.Chrome()

Step 3: Navigating to a Web Page

url = "https://www.example.com/login" 
driver.get(url)
content = driver.page_source 

From this content, we can extract data using BeautifulSoup as explained earlier.
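
A minimal sketch of handing the rendered HTML over to BeautifulSoup (extracting the <h2> headings here is just an illustration):

from bs4 import BeautifulSoup

soup = BeautifulSoup(content, 'html.parser')
headings = [h2.get_text(strip=True) for h2 in soup.find_all('h2')]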

Step 4: Finding and Interacting with Web Elements

1. Finding an Element: .find_element

There are multiple options to find elements by ID, name, class, CSS selector, and so on (see the short sketch after this example).

What it does:

The .find_element(By.ID) method locates a web element using its id attribute.

Example:

element = driver.find_element(By.ID, 'loginButton')

Explanation:

* This line finds the HTML element with the id attribute set to 'loginButton'.

* It returns a WebElement object, which you can then interact with (e.g., clicking or sending text to it).
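
As noted above, By supports several locator strategies besides ID; here is a short sketch (the selectors themselves are illustrative):

first_link = driver.find_element(By.TAG_NAME, 'a')
nav_items = driver.find_elements(By.CLASS_NAME, 'nav-item')  # returns a list of matches
submit = driver.find_element(By.CSS_SELECTOR, 'form#login button[type="submit"]')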

2. Sending Input to a Field: .send_keys()

What it does:

The .send_keys() method is used to simulate typing into a text field or any element that accepts input.

Example:

username_field = driver.find_element(By.ID, 'username')
username_field.send_keys('my_username')

Explanation:

* First, the code finds the element with the id 'username'.

* Then, it uses .send_keys('my_username') to input the text 'my_username' into the field.

3. Clicking a Button: click()

What it does:

The click() method is used to simulate a mouse click on a web element, such as a button or link.

Example:

login_button = driver.find_element(By.ID, 'loginButton')
login_button.click()

Explanation:

* After locating the login button using its ID, the click() method clicks it, triggering whatever action the button is programmed to do (e.g., submitting a form).

4. Taking a Screenshot: save_screenshot()

What it does:

The save_screenshot() method captures the current state of the browser window and saves it as an image file.

Example:

driver.save_screenshot('screenshot.png')

Explanation:

* This command takes a screenshot of the browser window and saves it as 'screenshot.png' in the current working directory.

* It’s useful for debugging or saving evidence of a test's result.
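
Putting these pieces together, here is a minimal end-to-end sketch of the login flow above. The URL and the element IDs ('username', 'password', 'loginButton') are assumptions, and driver.quit() is added at the end to close the browser and free resources:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.example.com/login")                       # hypothetical login page

driver.find_element(By.ID, 'username').send_keys('my_username')   # hypothetical field id
driver.find_element(By.ID, 'password').send_keys('my_password')   # hypothetical field id
driver.find_element(By.ID, 'loginButton').click()                 # hypothetical button id

driver.save_screenshot('screenshot.png')                          # capture the result
driver.quit()                                                     # close the browser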

3. Retrieving Website Data Using Pyppeteer

1. Setting Up the Environment

Start by installing and importing the necessary libraries:

from pyppeteer import launch

2. Launching the Browser

To interact with a web page, first, you need to launch a browser:

browser = await launch()

Explanation: This line of code launches a new instance of a browser (by default, Chromium). The await keyword is used because Pyppeteer operates asynchronously, so this and the following snippets are assumed to run inside an async function driven by asyncio (see the complete sketch at the end of this section).

3. Opening a New Page

After launching the browser, open a new page (similar to a new tab):

page = await browser.newPage()

Explanation: This command creates a new page (tab) within the browser, which you can then navigate to a URL.

4. Navigating to a Web Page

To visit a specific webpage, use:

await page.goto('https://www.example.com/login')

Explanation: The goto() function navigates the browser to the URL provided. It waits until the page is fully loaded before proceeding.

5. Finding and Interacting with Web Elements

a. Finding an Element: querySelector()

To find an element on the page, you can use the querySelector() method:

element = await page.querySelector('#loginButton')

Explanation: This finds the first HTML element with the id attribute set to 'loginButton'. It's similar to document.querySelector in JavaScript.

b. Sending Input to a Field: type()

To simulate typing into an input field:

await page.type('#username', 'my_username')

Explanation: The type() method sends keystrokes to the element with the id 'username'. This effectively enters 'my_username' into the field.

c. Clicking a Button: click()

To simulate a click on a button or link:

await element.click()

Explanation: After locating the button element (e.g., 'loginButton'), click() triggers the click action, just like a mouse click.

6. Taking a Screenshot: screenshot()

To capture the current state of the page as an image:

await page.screenshot({'path': 'screenshot.png'})

Explanation: This saves a screenshot of the page to 'screenshot.png' in the current working directory. It's useful for debugging or record-keeping.

7. Closing the Browser

After completing your tasks, it's good practice to close the browser:

await browser.close()

Explanation: This closes the browser and frees up resources.
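
Because every Pyppeteer call must be awaited inside a coroutine, here is a minimal end-to-end sketch that wraps the steps above in an async function and runs it with asyncio; the URL and element selectors are the same illustrative ones used earlier:

import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()                            # launch headless Chromium
    page = await browser.newPage()                      # open a new tab
    await page.goto('https://www.example.com/login')    # hypothetical login page
    await page.type('#username', 'my_username')         # hypothetical field id
    button = await page.querySelector('#loginButton')   # hypothetical button id
    await button.click()
    await page.screenshot({'path': 'screenshot.png'})   # save a screenshot
    await browser.close()                               # free resources

asyncio.get_event_loop().run_until_complete(main())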

4. Storing Data in Excel

1. Creating a New Excel Workbook

wb = Workbook()

* What it does: Creates a new Excel workbook. In Excel, a workbook is the file itself, which can contain multiple sheets.

* Why use it: You need to create a workbook to store the data you’ve scraped.

2. Accessing the Active Sheet

ws = wb.active

* What it does: Gets the default active worksheet in the workbook. When a new workbook is created, it comes with one sheet that is already active.

* Why use it: This provides a sheet to which you can start adding data immediately.

3. Setting the Worksheet Title

ws.title = "Scraped Data"

* What it does: Renames the active worksheet to "Scraped Data". The default name is usually "Sheet" or "Sheet1".

* Why use it: Giving a meaningful name to your worksheet makes it easier to identify and organize your data.

4. Writing Headers

ws.append(["Product Name", "Price", "Details"])

* What it does: Adds a row to the worksheet containing the column headers, like "Product Name", "Price", and "Details".

* Why use it: Headers help label your data, making it easier to understand when you view or analyze the Excel file later.

5. Appending Data Rows

# product_items is assumed to be a list of elements found earlier with BeautifulSoup,
# e.g. product_items = soup.find_all('div', class_='product') (the class name is illustrative)
for product in product_items:
    name = product.find('h2').get_text(strip=True)
    price = product.find('span', class_='price').get_text(strip=True)
    details = product.find('div', class_='details').get_text(strip=True)
    ws.append([name, price, details])

1. What it does:

* Iterates over each item in product_items, which is a list of product elements extracted from the web page.

* Extracts the product name, price, and details from each product element using BeautifulSoup methods like find() and get_text().

* Appends each product’s data as a new row in the worksheet.

2. Why use it: This step transfers your scraped data into the Excel sheet, structuring it in a tabular format.

6. Saving the Workbook

wb.save("scraped_data.xlsx")

* What it does: Saves the entire workbook to a file named scraped_data.xlsx.

* Why use it: This final step writes the data to an actual Excel file on your disk, making it accessible for future use or analysis.
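
For reference, here is a minimal end-to-end sketch that ties the whole workflow together: fetch a page with requests, parse it with BeautifulSoup, and write the rows to Excel with OpenPyXL. The URL and the 'product', 'price', and 'details' class names are illustrative assumptions; adapt the selectors to the page you are scraping:

import requests
from bs4 import BeautifulSoup
from openpyxl import Workbook

url = "https://www.example.com/products"                 # hypothetical listing page
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
response.raise_for_status()

soup = BeautifulSoup(response.text, 'html.parser')
product_items = soup.find_all('div', class_='product')   # illustrative class name

wb = Workbook()
ws = wb.active
ws.title = "Scraped Data"
ws.append(["Product Name", "Price", "Details"])

for product in product_items:
    name = product.find('h2').get_text(strip=True)
    price = product.find('span', class_='price').get_text(strip=True)
    details = product.find('div', class_='details').get_text(strip=True)
    ws.append([name, price, details])

wb.save("scraped_data.xlsx")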

Conclusion

In this blog, we covered how to use Python to scrape data from websites and save it to an Excel file. We explored using requests and BeautifulSoup for static content, Selenium for dynamic interactions, and Pyppeteer for advanced tasks.

By following these steps, you can automate data collection and store the information in a structured Excel format, making it easier to analyze and use. Whether you're new to web scraping or looking to enhance your skills, these Python tools offer a straightforward way to get the job done.

To read more, refer to our blog What are the Key Features of the Natural Language Toolkit (NLTK) in Python.


If you need any assistance with Odoo, we are online; please chat with us.


