Books
28.09.2020, last updated 24.11.2020 - Jay M. Patel - Reading time ~3 Minutes
Disclosure: As an Amazon Associate I earn from qualifying purchases
Getting Structured Data from the Internet: Running Web Crawlers/Scrapers on a Big Data Production Scale
ISBN-10: 1484265750
ISBN-13: 978-1484265758
Paperback: Nov 2020
About the Book
- Shows you how to process web crawls from Common Crawl, one of the largest publicly available web crawl datasets (petabyte scale), indexing over 25 billion web pages every month (see the sketch after this list).
- Takes you from developing a simple Python-based web scraper on your personal computer to a distributed crawler with multiple nodes running on the cloud.
- Teaches you to process raw data using NLP techniques and boilerplate removal to extract useful insights that can power businesses with vertical/meta search engines, lead generation and Internet marketing, monitoring of competitors, brands, and prices, and more.
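The Common Crawl processing described in the first bullet can be prototyped on a single machine before scaling out. Below is a minimal sketch, not code from the book, assuming the open source warcio package and a locally downloaded WARC segment; the file name is a placeholder.

```python
# Iterate over the HTTP responses in a Common Crawl WARC segment.
# Assumes `pip install warcio`; the file name below is a placeholder,
# not a real Common Crawl path.
from warcio.archiveiterator import ArchiveIterator

with open("CC-MAIN-example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read()
            print(url, len(html))
```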
Utilize web scraping at scale to quickly get unlimited amounts of free data available on the web into a structured format. This book teaches you to use Python scripts to crawl through websites at scale and scrape data from HTML and JavaScript-enabled pages, then convert it into structured formats such as CSV, Excel, or JSON, or load it into a SQL database of your choice. It goes beyond the basics of web scraping, covering advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, and contact details from pages at production scale, using distributed big data techniques on Amazon Web Services (AWS) cloud infrastructure.

The book also covers developing a robust data processing and ingestion pipeline for the Common Crawl corpus, a petabyte-scale web crawl dataset publicly available through AWS's Registry of Open Data. Getting Structured Data from the Internet includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and on handling real-world issues such as solving CAPTCHAs, proxy IP rotation, and more. Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.
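As a taste of the workflow the book starts from, here is a minimal sketch of scraping an HTML table into a CSV file. It is an illustration under assumptions rather than the book's code: the URL is a placeholder, and it assumes the requests, beautifulsoup4, and pandas packages.

```python
# Fetch a page, parse it with Beautiful Soup, and write a table to CSV.
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Placeholder URL; substitute a page containing an HTML table.
resp = requests.get("https://example.com/products.html", timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for tr in soup.select("table tr"):
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["td", "th"])]
    if cells:
        rows.append(cells)

# Treat the first row as the header and the rest as data.
df = pd.DataFrame(rows[1:], columns=rows[0])
df.to_csv("products.csv", index=False)
```

From this kind of single-page scraper, the book scales the same idea up to JavaScript-heavy pages with Selenium and to distributed crawls on AWS.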
Table of contents
Introduction to web scraping: Why is web scraping essential and who uses web scraping?
- Getting data from Reddit APIs
- Getting stock market data from Alpha Vantage
Web scraping in Python using the Beautiful Soup library
- Tags and structure of HTML documents
- Cascading Style Sheets (CSS)
- Building your first scraper with Beautiful Soup
- Scraping an HTML table into a pandas DataFrame
- Scraping XML files from FDA.gov
- XPath and lxml
- Intro to JavaScript and using Selenium for web scraping
Introduction to Cloud Computing and Amazon Web Services (AWS)
Natural Language Processing (NLP) and Text Analytics
Relational Databases and SQL Language
Introduction to Common Crawl Datasets
Web Crawl Processing on Big Data Scale
Advanced Web Crawlers
- Scrapy (see the spider sketch after this table of contents)
- Solving CAPTCHAs
- Proxy IP and user-agent rotation
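To give a flavor of that last chapter, here is a minimal Scrapy spider with naive user-agent rotation. It is a sketch under assumptions, not the book's code: the URL, selectors, and user-agent strings are placeholders, and a production crawler would use dedicated middleware for proxy and user-agent pools.

```python
# Minimal Scrapy spider with per-request user-agent rotation.
# Run with: scrapy runspider spider.py -o out.json
import random
import scrapy

# Placeholder user-agent strings; a real pool would be much larger.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

class LinkSpider(scrapy.Spider):
    name = "links"
    start_urls = ["https://example.com/page/1"]  # placeholder

    def start_requests(self):
        # Pick a different user agent for each request; pairing this
        # with proxy IP rotation follows the same per-request pattern.
        for url in self.start_urls:
            yield scrapy.Request(
                url, headers={"User-Agent": random.choice(USER_AGENTS)}
            )

    def parse(self, response):
        # Placeholder extraction: yield the text and href of every link.
        for link in response.css("a"):
            yield {
                "text": link.css("::text").get(),
                "href": link.attrib.get("href"),
            }
```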
Green Chemistry Education: Recent Developments
ISBN 978-3-11-056649-9 (PDF)
ISBN 978-3-11-056588-1 (EPUB)
ISBN 978-3-11-056578-2 (Hardcover)
Hardcover: 219 pages
de Gruyter (December 17, 2018)
Buy on Amazon.com | Table of contents
About the Book
This book covers recent advances in green chemistry, including the application of cheminformatics, quantitative structure-activity relationships (QSARs), and statistical approaches to modeling chemical reactivity. With my co-authors, I contributed a chapter on using machine learning and knowledge-based systems, currently in use at the US Environmental Protection Agency, to predict the environmental degradation of organic chemicals:
- Mills T., Patel J.M., Stevens C.T. (2018). The environmental fate of synthetic organic chemicals.