Sunday, 10 July 2016

Data Scraping – Will Definitely Benefit a Business Startup

With more and more data being shared on the internet, both the volume of data collected and the range of use cases are growing at an incredible pace. We have entered the "Big Data" age, and data scraping is one of the ways to feed big data engines with fresh data for predictive analytics, competition monitoring, or, in some cases, simply to copy data outright.

From a technology standpoint, doing data scraping well is fairly complicated. There are many open-source projects that let anybody run a web scraper on their own. Nevertheless, it is an entirely different story when scraping has to live inside a business: you need not only to maintain your scrapers but also to scale them and to extract exactly the data you need, intelligently.

That is why a number of companies now sell "data scraping" as a service. Their job is to take care of all the technical details so that you can get the data you need without any in-house expertise. Fundamentally, these startups focus on collecting the data and then extracting its value to sell to their customers.

Let’s take some examples:

• Sales Intelligence – Scrapers monitor competitors, marketplaces, online directories, and public market data to discover leads. For instance, some tools track which websites add or drop a competitor's JavaScript tags, so those sites can be flagged as qualified leads.
• Price Intelligence – A very common use is price monitoring. Whether in e-commerce, travel, or property, monitoring competitors' prices and adjusting yours accordingly is often the key. These services track prices and, using analytical algorithms, can advise you on where the puck is heading.
• Marketing – Data scraping can also be used to monitor how competitors are doing. From their marketplace reviews to their press coverage and published financial data, one can learn a lot. On the marketing side, there are growth hacking courses that teach how to use scraping for marketing purposes.

Financial and economic intelligence are growing as well: more and more financial, political, and economic data is accessible online, and the newer services that collect and aggregate it keep multiplying.

Let's go through a few points about the market:

• It is tough to evaluate how big the data scraping market is, since it sits at the intersection of several large industries such as sales, IT security, finance, and marketing intelligence. Scraping is certainly a small part of each of these industries, but it is expected to grow in the coming years.
• It is a safe bet that more and more SaaS products will find pioneering applications for web data scraping, and that more and more startups will use data scraping services, including from a security standpoint.
• As these startups generally enter huge markets with niche products and approaches (web data scraping is not a solution to everything; it is more like a feature), many are likely to be acquired by bigger players in the security, sales, or marketing tools industries. The technological barriers to entry are real as well.

Source URL : http://www.3idatascraping.com/data-scraping-will-definitely-benefit-a-business-startup.php

Friday, 8 July 2016

Web Scraping Services: Making Modern File Formats More Accessible

Data scraping is the process of automatically sorting through information contained on the internet inside HTML, PDF or other documents and collecting the relevant information into databases and spreadsheets for later retrieval. On most websites, the text is easily accessible in the source code, but an increasing number of businesses are using Adobe PDF format (Portable Document Format: a format which can be viewed with the free Adobe Acrobat software on almost any operating system). The advantage of the PDF format is that the document looks exactly the same no matter which computer you view it from, making it ideal for business forms, specification sheets, and so on; the disadvantage is that the text is often converted into an image, from which you cannot easily copy and paste. PDF scraping is the process of data scraping the information contained in PDF files. To scrape a PDF document, you must employ a more diverse set of tools.

There are two main types of PDF files: those built from a text file and those built from an image (likely scanned in). Adobe's own software is capable of PDF scraping from text-based PDF files but special tools are needed for PDF scraping text from image-based PDF files. The primary tool for PDF scraping is the OCR program. OCR, or Optical Character Recognition, programs scan a document for small pictures that they can separate into letters. These pictures are then compared to actual letters and if matches are found, the letters are copied into a file. OCR programs can perform PDF scraping of image-based PDF files quite accurately but they are not perfect.
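
As an illustration of the two routes, here is a minimal Python sketch. It assumes the third-party libraries pdfminer.six (for text-based PDFs), pdf2image and pytesseract (for OCR on image-based PDFs), plus the external poppler and tesseract tools they wrap, and "document.pdf" is just a placeholder file name:

# Text-based PDF: the characters are already stored in the file, so read them directly
from pdfminer.high_level import extract_text

text = extract_text("document.pdf")
print(text[:500])

# Image-based PDF: render each page to an image, then let an OCR engine recognise the letters
from pdf2image import convert_from_path   # needs the poppler utilities installed
import pytesseract                         # needs the tesseract OCR engine installed

pages = convert_from_path("document.pdf")  # one PIL image per page
ocr_text = "\n".join(pytesseract.image_to_string(p) for p in pages)
print(ocr_text[:500])

As noted above, the OCR route is not perfect, so its output usually needs checking before it goes into a database.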

Once the OCR program or Adobe program has finished PDF scraping a document, you can search through the data to find the parts you are most interested in. This information can then be stored into your favorite database or spreadsheet program. Some PDF scraping programs can sort the data into databases and/or spreadsheets automatically making your job that much easier.

Quite often you will not find a PDF scraping program that will obtain exactly the data you want without customization. Surprisingly, a search on Google turned up only one business that will create a customized PDF scraping utility for your project. A handful of off-the-shelf utilities claim to be customizable, but seem to require a bit of programming knowledge and a time commitment to use effectively. Obtaining the data yourself with one of these tools may be possible, but will likely prove quite tedious and time consuming. It may be advisable to contract a company that specializes in PDF scraping to do it for you quickly and professionally.

Let's explore some real-world examples of the uses of PDF scraping technology. A group at Cornell University wanted to improve a database of technical documents in PDF format by taking the old PDF files, where the links and references were just images of text, and changing those links and references into working, clickable links, thus making the database easy to navigate and cross-reference. They employed a PDF scraping utility to deconstruct the PDF files and figure out where the links were. They could then create a simple script to re-create the PDF files with working links replacing the old text images.

A computer hardware vendor wanted to display specifications data for his hardware on his website. He hired a company to perform PDF scraping of the hardware documentation on the manufacturers' website and save the PDF scraped data into a database he could use to update his webpage automatically.

PDF scraping simply collects information that is already publicly available on the internet and, in itself, it generally does not violate copyright laws.

PDF scraping is a great technology that can significantly reduce your workload if that workload involves retrieving information from PDF files. Applications exist that can help you with smaller, easier PDF scraping projects, and companies exist that will create custom applications for larger or more intricate PDF scraping jobs.

Source URL :  http://yellowpagesdatascraping.blogspot.in/2015/06/web-scraping-services-making-modern.html

Wednesday, 29 June 2016

Scraping the Bottom of the Barrel - The Perils of Online Article Marketing

Many online article marketers so desperately wish to succeed, they want to dump corporate life and work for themselves out of their home. They decide they are going to create an online money making website. Therefore, they look around to see what everyone else is doing, and watch the methods others use to attract online buyers, and then they mimic their marketing, their strategies, and their business models.

Still, if you are copying what other, less ethical people are doing in online article marketing, the ones who are scraping the bottom of the barrel and using false advertising and misrepresentation, then all you are really doing is perpetuating distrust on the Internet. You are therefore hurting everyone, including people like me, and you must realize that people like me don't appreciate that.

Let me give you a few examples of some of the things going on out there, things that are being done by people who are ethically challenged. Far too many people write articles and then, in their byline, send the Internet surfer or reader of the article to a website with a squeeze page. The squeeze page has no real information on it; rather, it asks for the visitor's name and e-mail address.

If the would-be Internet surfer is unwise enough to type in their name and e-mail address, they will be spammed by e-mail with various hard-sell marketing pieces. Once the surfer does put in their e-mail address, the website grants them access and takes them to a page with information about what is being sold, or about the online marketing "make you a millionaire" scheme.

Generally, these are five-page sales letters, with tons of testimonials from people you've never heard of and who may not actually exist, and all sorts of unsubstantiated earnings claims about how much money you will make if you give them $39.35 by way of PayPal for this limited offer, "Now!" In return they will send you an e-book with a strategic plan for how you can duplicate what they are doing. The reality is that whatever they are doing is questionable to begin with.

Source URL  : http://ezinearticles.com/?Scraping-the-Bottom-of-the-Barrel---The-Perils-of-Online-Article-Marketing&id=2710103

Thursday, 12 May 2016

Beginner’s guide to Web Scraping in Python (using Beautiful Soup)

Introduction

The need and importance of extracting data from the web is becoming increasingly loud and clear. Every few weeks, I find myself in a situation where we need to extract data from the web. For example, last week we were thinking of creating an index of hotness and sentiment about various data science courses available on the internet. This would require not only finding new courses, but also scraping the web for their reviews and then summarizing them in a few metrics! This is one of those problems / products whose efficacy depends more on web scraping and information extraction (data collection) than on the techniques used to summarize the data.

Ways to extract information from the web

There are several ways to extract information from the web. Using APIs is probably the best way to extract data from a website. Almost all large websites like Twitter, Facebook, Google and StackOverflow provide APIs to access their data in a more structured manner. If you can get what you need through an API, it is almost always the preferred approach over web scraping. This is because if the provider gives you access to structured data, why would you want to build an engine to extract the same information?

Sadly, not all websites provide an API. Some do so because they do not want readers to extract large amounts of information in a structured way, while others don't provide APIs due to lack of technical knowledge. What do you do in these cases? Well, we need to scrape the website to fetch the information.

There might be a few other ways like RSS feeds, but they are limited in their use and hence I am not including them in the discussion here.

What is Web Scraping?

Web scraping is a computer software technique of extracting information from websites. This technique mostly focuses on the transformation of unstructured data (HTML format) on the web into structured data (database or spreadsheet).

You can perform web scraping in various ways, from using Google Docs to almost any programming language. I would resort to Python because of its ease of use and rich ecosystem. It has a library known as 'Beautiful Soup' which assists with this task. In this article, I'll show you the easiest way to learn web scraping using Python programming.

For those of you who need a non-programming way to extract information out of web pages, you can also look at import.io. It provides a GUI-driven interface to perform all basic web scraping operations. The hackers can continue reading this article!

Libraries required for web scraping

As we know, Python is an open-source programming language, and you may find many libraries that perform the same function. Hence, it is necessary to find the best library to use. I prefer Beautiful Soup (a Python library), since it is easy and intuitive to work with. Precisely, I'll use two Python modules for scraping data:

Urllib2: It is a Python module which can be used for fetching URLs. It defines functions and classes to help with URL actions (basic and digest authentication, redirections, cookies, etc). For more detail refer to the documentation page.

Beautiful Soup: It is an incredible tool for pulling information out of a webpage. You can use it to extract tables, lists and paragraphs, and you can also apply filters to extract information from web pages. In this article, we will use the latest version, Beautiful Soup 4. You can look at the installation instructions on its documentation page.

Beautiful Soup does not fetch the web page for us. That is why I use urllib2 in combination with the BeautifulSoup library.

Python has several other options for HTML scraping in addition to Beautiful Soup. Here are some others:

    -mechanize
    -scrapemark
    -scrapy

Basics – Get familiar with HTML (Tags)

While performing web scraping, we deal with HTML tags, so we must have a good understanding of them. If you already know the basics of HTML, you can skip this section. The basic syntax of an HTML page is built from a handful of tags, as elaborated below (a minimal example follows the list):

    <!DOCTYPE html> : HTML documents must start with a type declaration
    The HTML document is contained between <html> and </html>
    The visible part of the HTML document is between <body> and </body>
    HTML headings are defined with the <h1> to <h6> tags
    HTML paragraphs are defined with the <p> tag
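
Putting those tags together, a minimal HTML page looks something like this:

<!DOCTYPE html>
<html>
<body>

<h1>My First Heading</h1>
<p>My first paragraph.</p>

</body>
</html>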

Scraping a web page using Beautiful Soup

Here, I am going to scrape data from a Wikipedia page. Our final goal is to extract the list of state and union territory capitals in India, along with some basic details such as year of establishment and former capital, from this Wikipedia page. Let's learn by doing this project step by step.

Import necessary libraries:

#import the library used to query a website
import urllib2
#specify the url
wiki = "https://en.wikipedia.org/wiki/List_of_state_and_union_territory_capitals_in_India"
#Query the website and return the html to the variable 'page'
page = urllib2.urlopen(wiki)
#import the Beautiful Soup functions to parse the data returned from the website
from bs4 import BeautifulSoup
#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page)

Use the "prettify" function to look at the nested structure of the HTML page, for example:
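
print(soup.prettify())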

Above, you can see the structure of the HTML tags. This will help you learn about the different tags available and how you can play with them to extract information.

Work with HTML tags

    soup.<tag>: Returns the content between the opening and closing tag, including the tag itself.
    In [30]: soup.title
    Out[30]: <title>List of state and union territory capitals in India - Wikipedia, the free encyclopedia</title>
    soup.<tag>.string: Returns the string within a given tag.
    In [38]: soup.title.string
    Out[38]: u'List of state and union territory capitals in India - Wikipedia, the free encyclopedia'

Find all the links within the page's <a> tags: We know that a link is marked up with the "<a>" tag. So, we can start with soup.a, which returns the first link in the web page. Let's do it.

    In [40]:soup.a
    Out[40]:<a id="top"></a>

Above, you can see that we have only one output, since soup.a returns just the first match. Now, to extract all the links within <a> tags, we will use find_all():
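
all_links = soup.find_all("a")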

This returns all the <a> tags, including their titles, links and other information. Now, to show only the links, we need to iterate over each <a> tag and then pull out the link from the "href" attribute using get():
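
for link in all_links:
    print(link.get("href"))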

Find the right table: As we are looking for a table from which to extract information about state capitals, we should identify the right table first. Let's write a command to extract the information within all <table> tags.

all_tables=soup.find_all('table')

Now, to identify the right table, we will use the "class" attribute of the table and filter on it. In Chrome, you can check the class name by right-clicking on the required table of the web page –> Inspect element –> copying the class name, or by going through the output of the above command to find the class name of the right table.

right_table=soup.find('table', class_='wikitable sortable plainrowheaders')

right_table

Extract the information into a DataFrame: Here, we need to iterate through each row (<tr>), assign each element of the row (<td>) to a variable, and append it to a list. Let's first look at the HTML structure of the table (I am not going to extract the information in the table heading, <th>).
You will notice that the second element of each <tr> is within a <th> tag, not a <td> tag, so we need to take care of that. To access the value of each element, we will use the "find(text=True)" option with each element. Let's look at the code:

#Generate lists

A=[]
B=[]
C=[]
D=[]
E=[]
F=[]
G=[]
for row in right_table.findAll("tr"):
    cells = row.findAll('td')
    states=row.findAll('th') #To store second column data
    if len(cells)==6: #Only extract table body not heading
        A.append(cells[0].find(text=True))
        B.append(states[0].find(text=True))
        C.append(cells[1].find(text=True))
        D.append(cells[2].find(text=True))
        E.append(cells[3].find(text=True))
        F.append(cells[4].find(text=True))
        G.append(cells[5].find(text=True))

#import pandas to convert list to data frame

import pandas as pd
df=pd.DataFrame(A,columns=['Number'])
df['State/UT']=B
df['Admin_Capital']=C
df['Legislative_Capital']=D
df['Judiciary_Capital']=E
df['Year_Capital']=F
df['Former_Capital']=G
df

Similarly, you can perform various other types of web scraping using Beautiful Soup. This will reduce the manual effort of collecting data from web pages. You can also look at other attributes like .parent, .contents, .descendants, .next_sibling and .previous_sibling, as well as navigation by tag name. These will help you scrape web pages effectively.
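
As a small illustration of those navigation attributes, reusing the soup object built above:

first_link = soup.a
print(first_link.parent.name)     # name of the tag that directly contains the first <a>
print(first_link.next_sibling)    # whatever immediately follows that <a> in the document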

But, why can’t I just use Regular Expressions?

Now, if you know regular expressions, you might be thinking that you can write code using them that does the same thing for you. I definitely had this question. From my experience of using both Beautiful Soup and regular expressions for the same tasks, I found the following:

Code written with Beautiful Soup is usually more robust than code written with regular expressions. Regular-expression code needs to be altered whenever a page changes. Beautiful Soup needs that too in some cases; it is just that Beautiful Soup copes relatively better.

Regular expressions are much faster than Beautiful Soup, usually by a factor of 100 in giving the same outcome.

So, it boils down to speed vs. robustness of the code, and there is no universal winner here. If the information you are looking for can be extracted with simple regex statements, you should go ahead and use them. For almost any complex work, though, I usually recommend Beautiful Soup over regex.
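
As a tiny illustration of that trade-off, both approaches below pull a title out of a hard-coded HTML snippet (the snippet and variable names are invented for this example):

import re
from bs4 import BeautifulSoup

html = "<html><head><title>List of capitals</title></head><body></body></html>"

# Regular expression: fast, but tied to the exact markup
match = re.search(r"<title>(.*?)</title>", html)
print(match.group(1) if match else None)

# Beautiful Soup: slower, but tolerant of messier or changing markup
snippet_soup = BeautifulSoup(html)
print(snippet_soup.title.string)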

End Note

In this article, we looked at web scraping using "Beautiful Soup" and "urllib2" in Python. We also looked at the basics of HTML and performed the web scraping step by step while working through an example. I'd recommend that you practice this and use it for collecting data from web pages.


 Source : http://www.analyticsvidhya.com/blog/2015/10/beginner-guide-web-scraping-beautiful-soup-python/

Friday, 29 April 2016

Customized Web Data Scraping Services

To understand your customers’ behaviour it is crucial to organize the scattered data into a single repository. There are experts today who can scrape websites to extract data and develop analytics. Data extraction is a major requisite for any small or large company that deals with a massive volume of information that is stored in a complex structure. Premium data mining services help in extracting and structuring data from structured as well as semi-structured documents found on the internet or in other data warehouses.

Companies dealing with large amounts of data on a regular basis may need to convert these sets of data into useful information. In that case, web scraping services come to the rescue. The experts offering such services will ensure that none of the data is missed. Customized data extraction is carried out mostly on customer databases in order to analyse their behaviour and demographic characteristics. Personalized services offer a whole lot of benefits, which are:

Ensure Data Quality

The experts use a custom data extractor to ensure that the extracted data are of high quality. More than forty percent of websites change their structure every month, so it can be difficult for you to monitor them all. A customized data extraction service will allow you to concentrate on your business's larger goals, instead of wasting your time on DIY web data extraction.

Availability of Custom Scraper Tool

A reputed web scraping service provider is expected to have a custom scraper tool with which they can extract information efficiently without missing anything. Using such tools they can scrape even the most complex data and provide it in any format.

Avoid Possible Human Errors

When extracting large volumes of data, even professional service providers can sometimes miss things. With customized, largely automated services, however, the chances of human error are much smaller. Besides, a lot of time and cost can be saved too.

Great Speed

A custom web scraping service provider, with efficient tools, can work very fast to convert large amounts of data into analytics, and can extract data from multiple sources. The extracted data can then be delivered in customized, structured formats such as a Microsoft database, text, HTML, or SQL scripts.

Update Website

Additionally, you will be able to update your website with the latest prices and filter searches by skipping data that does not match the keyword.

Tailor-made services even allow the professionals to extract data from emails and other communication channels efficiently. With this data you will be able to spot what your business needs to implement in order to convert visitors into customers, and you can shape your marketing plans accordingly.

Custom website data scraping services give companies access to on-demand data scraped from the web, based on their individual needs. The experts offering end-to-end data extraction services can also help prepare the analytics for your business.

 Source : http://www.web-parsing.com/blog/customized-web-data-scrapping-services

Wednesday, 27 April 2016

Data Extraction: Tips to Get Exemplary Results

Data extraction is a skill: the more you master it, the better your chances of getting a lucid picture of a volatile market and a clearer perspective on constantly changing trends. Escalating market volatility and intensifying competition have been the biggest factors behind the rise of data extraction and data mining.

Data extraction is primarily used by companies (large and small alike) to collect data about a specific industry, about targeted customers, or about their competition in the market. In fact, it has become a primary tool for marketers planning their moves for branding and promoting particular products or services. It helps a wide range of industrial sectors find and learn about the specific data they need.

And now, with the rise of the internet, web scraping has emerged as an important contributor to your success, the success of your venture or organization. It processes the HTML of a web page to obtain data and convert it into another format (e.g. HTML to XML).

Various extraction tools form an integral part of data extraction and data scraping. The following offers a brief outline of some of these tools:

Email Extraction – An email extractor tool is used to acquire email IDs from dependable sources automatically (see the sketch after this list).

Screen Scraping – Screen scraping is the practice of reading text information from a screen and collecting visual data, rather than parsing the underlying data as in web scraping.

Data Mining – As the name suggests, data mining is the process of discovering patterns in information. It typically transforms the information into formats like CSV, MS Excel or HTML, depending on your requirements.

Web Spider – A web spider is a computer program that browses the internet in a systematic, automated manner. It is used by many search engines to provide up-to-date data.
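
As a rough sketch of the idea behind an email extractor (the regular expression is deliberately simple and the sample text is invented for illustration):

import re

# A naive pattern; real extractors handle many more edge cases
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text):
    # Return the unique email addresses found in a block of text, sorted
    return sorted(set(EMAIL_RE.findall(text)))

print(extract_emails("Contact sales@example.com or support@example.org for details."))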

It is often seen that while extracting data, many get lost in a labyrinth of confusion, data overabundance, and a lot of strange, unfamiliar terms. Handling all of this may sound easy; however, when it is not executed with appropriate procedures and processes, the results can be disastrous.

This in no way means that data mining is rocket science that only a few gifted and skilled people can take up. All it requires is undivided attention, careful preparation, and training, so brace yourself for an overview of some practical tips that can help you extract data successfully and give your business a boost.

Identify your Business Goals:

Get a clear perspective on what your business goals are.

Data extraction can be split into various branches, and one needs to choose wisely, depending on the business goals. For example, if your primary requirement is to get the email IDs of potential clients for an email campaign, you certainly need an email extractor. This tool extracts email IDs from trustworthy sources automatically; it essentially collects business contacts from web pages, text files, HTML files, or any other format, without duplicating the email IDs. So, if you are not sure what you want, even the best tools will be of no use!

A crystal-clear mindset helps you better understand the market scenario and thus formulate powerful, effective strategies to get the desired outcomes. For example, people in the real estate business should have a vision of which area they want to target specifically. With a clear vision they can spell out exactly what they want and where to find it.

Set Realistic Expectations:

Once you have identified your business goals, make sure they are realistic and attainable! Unrealistic and unachievable targets are the real cause of obstacles and frustration later on.

Since there are various tools that can be employed to extract data, vague or unclear goals make it difficult to determine which tool to apply.

A crystal-clear mindset will also give you insight into the direction your business is headed.

Moreover, you can determine which method will produce excellent results. You can get a lucid picture of the past and present of your competitors, which helps in setting targets based on others' experience. It is usually not wise to set expectations far beyond anything you have achieved before.

Appoint a Skilled Data Miner:

A skilled data miner with excellent data mining skills will reduce the painstaking and tiresome work of planning, devising, and preparation.

For fresh start-ups, you can go ahead with the standard procedure; however, if you have several professionals at your disposal, pick the right one, someone who is not only knowledgeable but also reliable and committed to the task.

Prevent Data Deposits:

Being absolutely sure of what you really want will help you avoid piling up unnecessary data.

Data mining, just like real mining, is the skill of knowing where the real treasure lies and being able to get it out in the most efficient and effective way.

Being able to spot authentic, reliable sources of well-researched information is the shortcut to locating the right and exact data.

If you aimlessly open every website, the results are bound to be ambiguous and will ultimately be a waste of time and effort.


Source:  http://www.habiledata.com/blog/data-extraction-is-not-a-rocket-science-follow-these-4-tips-to-get-exemplary-results

Saturday, 11 July 2015

Scraping data from a list of web pages using Google Docs

Quite often when you’re looking for data as part of a story, that data will not be on a single page, but on a series of pages. To manually copy the data from each one – or even scrape the data individually – would take time. Here I explain a way to use Google Docs to grab the data for you.

Some basic principles

Although Google Docs is a pretty clumsy tool to use to scrape webpages, the method used is much the same as if you were writing a scraper in a programming language like Python or Ruby. For that reason, I think this is a good quick way to introduce the basics of certain types of scrapers.

Here’s how it works:

Firstly, you need a list of links to the pages containing data.

Quite often that list might be on a webpage which links to them all, but if not you should look at whether the links have any common structure, for example "http://www.country.com/data/australia" or "http://www.country.com/data/country2". If they do, then you can generate a list by filling in the part of the URL that changes each time (in this case, the country name or number), assuming you have a list to fill it from (i.e. a list of countries, codes or simple addition).

Second, you need the destination pages to have some consistent structure to them. In other words, they should look the same (although looking the same doesn’t mean they have the same structure – more on this below).

The scraper then cycles through each link in your list, grabs particular bits of data from each linked page (because it is always in the same place), and saves them all in one place.
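
For comparison, here is the same three-step pattern written as a bare-bones Python scraper of the kind mentioned above. The URLs, the cell being grabbed and the variable names are placeholders for illustration, not the colliery data used below:

import urllib2
from bs4 import BeautifulSoup

# Step 1: a list of links to the pages containing the data
urls = [
    "http://www.example.com/data/page1",
    "http://www.example.com/data/page2",
]

results = []
for url in urls:
    # Steps 2 and 3: each page has the same structure, so the same lookup works
    # every time, and every result is saved in one place
    soup = BeautifulSoup(urllib2.urlopen(url).read())
    cell = soup.find("td")    # in a real scrape you would target a specific cell
    results.append((url, cell.get_text() if cell else None))

print(results)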

Scraping with Google Docs using =importXML – a case study

If you’ve not used =importXML before it’s worth catching up on my previous 2 posts How to scrape webpages and ask questions with Google Docs and =importXML and Asking questions of a webpage – and finding out when those answers change.

This takes things a little bit further.

In this case I’m going to scrape some data for a story about local history – the data for which is helpfully published by the Durham Mining Museum. Their homepage has a list of local mining disasters, with the date and cause of the disaster, the name and county of the colliery, the number of deaths, and links to the names and to a page about each colliery.

However, there is not enough geographical information here to map the data. That, instead, is provided on each colliery’s individual page.

So we need to go through this list of webpages, grab the location information, and pull it all together into a single list.

Finding the structure in the HTML

To do this we need to isolate which part of the homepage contains the list. If you right-click on the page to 'view source' and search for 'Haig' (the first colliery listed) you can see it's in a table that has a beginning tag like so: <table border=0 align=center style="font-size:10pt">

We can use =importXML to grab the contents of the table like so:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]")

But we only want the links, so how do we grab just those instead of the whole table contents?

The answer is to add more detail to our request. If we look at the HTML that contains the link, it looks like this:

<td valign=top><a href="http://www.dmm.org.uk/colliery/h029.htm">Haig&nbsp;Pit</a></td>

So it’s within a <td> tag – but all the data in this table is, not surprisingly, contained within <td> tags. The key is to identify which <td> tag we want – and in this case, it’s always the fourth one in each row.

So we can add “//td[4]” (‘look for the fourth <td> tag’) to our function like so:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]")

Now we should have a list of the collieries – but we want the actual URL of the page that is linked to with that text. That is contained within the value of the href attribute – or, put in plain language: it comes after the bit that says href=”.

So we just need to add one more bit to our function: “//@href”:

=Importxml("http://www.dmm.org.uk/mindex.htm", "//table[starts-with(@style, 'font-size:10pt')]//td[4]//@href")

So, reading from the far right inwards, this is what it says: “Grab the value of href, within the fourth <td> tag on every row, of the table that has a style value of font-size:10pt”

Note: if there was only one link in every row, we wouldn’t need to include //td[4] to specify the link we needed.

Scraping data from each link in a list

Now we have a list – but we still need to scrape some information from each link in that list.

Firstly, we need to identify the location of the information we need on the linked pages. Taking the first page, view source and search for 'Sheet 89', which are the first two words of the 'Map Ref' line.

The HTML code around that information looks like this:

<td valign=top>(Sheet 89) NX965176, 54° 32' 35" N, 3° 36' 0" W</td>

Looking a little further up, the table that contains this cell uses HTML like this:

<table border=0 width="95%">

So if we needed to scrape this information, we would write a function like this:

=importXML("http://www.dmm.org.uk/colliery/h029.htm", "//table[starts-with(@width, '95%')]//tr[2]//td[2]")

…And we’d have to write it for every URL.

But because we have a list of URLs, we can do this much quicker by using cell references instead of the full URL.

So. Let’s assume that your formula was in cell C2 (as it is in this example), and the results have formed a column of links going from C2 down to C11. Now we can write a formula that looks at each URL in turn and performs a scrape on it.

In D2 then, we type the following:

=importXML(C2, "//table[starts-with(@width, '95%')]//tr[2]//td[2]")

If you copy the cell all the way down the column, it will change the function so that it is performed on each neighbouring cell.

In fact, we could simplify things even further by putting the second part of the function in cell D1 – without the quotation marks – like so:

//table[starts-with(@width, '95%')]//tr[2]//td[2]

And then in D2 change the formula to this:

=ImportXML(C2,$D$1)

(The dollar signs keep the D1 reference the same even when the formula is copied down, while C2 will change in each cell)

Now it works – we have the data from each of 8 different pages. Almost.

Troubleshooting with =IF

The problem is that the structure of those pages is not as consistent as we thought: the scraper is producing extra cells of data for some, which knocks out the data that should be appearing there from other cells.

So I’ve used an IF formula to clean that up as follows:

In cell E2 I type the following:

=if(D2="", ImportXML(C2,$D$1), D2)

Which says ‘If D2 is empty, then run the importXML formula again and put the results here, but if it’s not empty then copy the values across‘

That formula is copied down the column.

But there’s still one empty column even now, so the same formula is used again in column F:

=if(E2="", ImportXML(C2,$D$1), E2)

A hack, but an instructive one

As I said earlier, this isn’t the best way to write a scraper, but it is a useful way to start to understand how they work, and a quick method if you don’t have huge numbers of pages to scrape. With hundreds of pages, it’s more likely you will miss problems – so watch out for inconsistent structure and data that doesn’t line up.

Source: http://onlinejournalismblog.com/2011/10/14/scraping-data-from-a-list-of-webpages-using-google-docs/