Snowden Used Automated Web Crawler To Scrape Data From Over 1.7 Million Restricted National Security Agency Files — Videos
The Pronk Pops Show 207, February 10, 2014, Story 1: Snowden Used Automated Web Crawler To Scrape Data From Over 1.7 Million Restricted National Security Agency Files — Videos
Snowden Used Common, Low-Cost Tool To Get NSA Files: Report
Edward Snowden, v 1.0: NSA Whistleblower William Binney Tells All
NSA whistleblower Edward Snowden: ‘I don’t want to live in a society that does these sort of things’
Dick Cheney ‘This Week’ Interview – Former Vice President on NSA Spying Revelations and GOP Politics
A Massive Surveillance State Glenn Greenwald Exposes Covert NSA Program Collecting Calls, Emails
Web Crawler – CS101 – Udacity
Web scraping the easy way
Python Web Scraping Tutorial 1 (Intro To Web Scraping)
Web Scraping Techniques
Web scraping: Reliably and efficiently pull data from pages that don’t expect it
2014 Best Scraper pro gold email and phone extractor harvestor review- website scraping lead
Lecture -38 Search Engine And Web Crawler – Part-I
Lecture -39 Search Engine And Web Crawlers: Part-II
Web Scraping Review 1
Web Scraping Review 2
Snowden Used Low-Cost Tool to Best N.S.A.
By DAVID E. SANGER and ERIC SCHMITT
Intelligence officials investigating how Edward J. Snowden gained access to a huge trove of the country’s most highly classified documents say they have determined that he used inexpensive and widely available software to “scrape” the National Security Agency’s networks, and kept at it even after he was briefly challenged by agency officials.
Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official. “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated.”
The findings are striking because the N.S.A.’s mission includes protecting the nation’s most sensitive military and intelligence computer systems from cyberattacks, especially the sophisticated attacks that emanate from Russia and China. Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.
Moreover, Mr. Snowden succeeded nearly three years after the WikiLeaks disclosures, in which military and State Department files, of far less sensitivity, were taken using similar techniques.
Mr. Snowden had broad access to the N.S.A.’s complete files because he was working as a technology contractor for the agency in Hawaii, helping to manage the agency’s computer systems in an outpost that focuses on China and North Korea. A web crawler, also called a spider, automatically moves from website to website, following links embedded in each document, and can be programmed to copy everything in its path.
Mr. Snowden appears to have set the parameters for the searches, including which subjects to look for and how deeply to follow links to documents and other data on the N.S.A.’s internal networks. Intelligence officials told a House hearing last week that he accessed roughly 1.7 million files.
Among the materials prominent in the Snowden files are the agency’s shared “wikis,” databases to which intelligence analysts, operatives and others contributed their knowledge. Some of that material indicates that Mr. Snowden “accessed” the documents. But experts say they may well have been downloaded not by him but by the program acting on his behalf.
Agency officials insist that if Mr. Snowden had been working from N.S.A. headquarters at Fort Meade, Md., which was equipped with monitors designed to detect when a huge volume of data was being accessed and downloaded, he almost certainly would have been caught. But because he worked at an agency outpost that had not yet been upgraded with modern security measures, his copying of what the agency’s newly appointed No. 2 officer, Rick Ledgett, recently called “the keys to the kingdom” raised few alarms.
“Some place had to be last” in getting the security upgrade, said one official familiar with Mr. Snowden’s activities. But he added that Mr. Snowden’s actions had been “challenged a few times.”
In at least one instance when he was questioned, Mr. Snowden provided what were later described to investigators as legitimate-sounding explanations for his activities: As a systems administrator he was responsible for conducting routine network maintenance. That could include backing up the computer systems and moving information to local servers, investigators were told.
But from his first days working as a contractor inside the N.S.A.’s aging underground Oahu facility for Dell, the computer maker, and then at a modern office building on the island for Booz Allen Hamilton, the technology consulting firm that sells and operates computer security services used by the government, Mr. Snowden learned something critical about the N.S.A.’s culture: While the organization built enormously high electronic barriers to keep out foreign invaders, it had rudimentary protections against insiders.
“Once you are inside the assumption is that you are supposed to be there, like in most organizations,” said Richard Bejtlich, the chief security strategist for FireEye, a Silicon Valley computer security firm, and a senior fellow at the Brookings Institution. “But that doesn’t explain why they weren’t more vigilant about excessive activity in the system.”
Investigators have yet to answer the question of whether Mr. Snowden happened into an ill-defended outpost of the N.S.A. or sought a job there because he knew it had yet to install the security upgrades that might have stopped him.
“He was either very lucky or very strategic,” one intelligence official said. A new book, “The Snowden Files,” by Luke Harding, a correspondent for The Guardian in London, reports that Mr. Snowden sought his job at Booz Allen because “to get access to a final tranche of documents” he needed “greater security privileges than he enjoyed in his position at Dell.”
Through his lawyer at the American Civil Liberties Union, Mr. Snowden did not specifically address the government’s theory of how he obtained the files, saying in a statement: “It’s ironic that officials are giving classified information to journalists in an effort to discredit me for giving classified information to journalists. The difference is that I did so to inform the public about the government’s actions, and they’re doing so to misinform the public about mine.”
The N.S.A. declined to comment on its investigation or the security changes it has made since the Snowden disclosures. Other intelligence officials familiar with the findings of the investigations underway — there are at least four — were granted anonymity to discuss the investigations.
In interviews, officials declined to say which web crawler Mr. Snowden had used, or whether he had written some of the software himself. Officials said it functioned like Googlebot, a widely used web crawler that Google developed to find and index new pages on the web. What officials cannot explain is why the presence of such software in a highly classified system was not an obvious tip-off to unauthorized activity.
When inserted with Mr. Snowden’s passwords, the web crawler became especially powerful. Investigators determined he probably had also made use of the passwords of some colleagues or supervisors.
But he was also aided by a culture within the N.S.A., officials say, that “compartmented” relatively little information. As a result, a 29-year-old computer engineer, working from a World War II-era tunnel in Oahu and then from downtown Honolulu, had access to unencrypted files that dealt with information as varied as the bulk collection of domestic phone numbers and the intercepted communications of Chancellor Angela Merkel of Germany and dozens of other leaders.
Officials say web crawlers are almost never used on the N.S.A.’s internal systems, making it all the more inexplicable that the one used by Mr. Snowden did not set off alarms as it copied intelligence and military documents stored in the N.S.A.’s systems and linked through the agency’s internal equivalent of Wikipedia.
The answer, officials and outside experts say, is that no one was looking inside the system in Hawaii for hard-to-explain activity. “The N.S.A. had the solution to this problem in hand, but they simply didn’t push it out fast enough,” said James Lewis, a computer expert at the Center for Strategic and International Studies who has talked extensively with intelligence officials about how the Snowden experience could have been avoided.
Nonetheless, the government had warning that it was vulnerable to such attacks. Similar techniques were used by Chelsea Manning, then known as Pfc. Bradley Manning, who was convicted of turning documents and videos over to WikiLeaks in 2010.
Evidence presented during Private Manning’s court-martial for his role as the source for large archives of military and diplomatic files given to WikiLeaks revealed that he had used a program called “wget” to download the batches of files. That program automates the retrieval of large numbers of files, but it is considered less powerful than the tool Mr. Snowden used.
The program’s use prompted changes in how secret information is handled at the State Department, the Pentagon and the intelligence agencies, but recent assessments suggest that those changes may not have gone far enough. For example, arguments have broken out about whether the N.S.A.’s data should all be encrypted “at rest” — when it is stored in servers — to make it harder to search and steal. But that would also make it harder to retrieve for legitimate purposes.
Investigators have found no evidence that Mr. Snowden’s searches were directed by a foreign power, despite suggestions to that effect by the chairman of the House Intelligence Committee, Representative Mike Rogers, Republican of Michigan, in recent television appearances and at a hearing last week.
But that leaves open the question of how Mr. Snowden chose the search terms to obtain his trove of documents, and why, according to James R. Clapper Jr., the director of national intelligence, they yielded a disproportionately large number of documents detailing American military movements, preparations and abilities around the world.
In his statement, Mr. Snowden denied any deliberate effort to gain access to any military information. “They rely on a baseless premise, which is that I was after military information,” Mr. Snowden said.
The head of the Defense Intelligence Agency, Lt. Gen. Michael T. Flynn, told lawmakers last week that Mr. Snowden’s disclosures could tip off adversaries to American military tactics and operations, and force the Pentagon to spend vast sums to safeguard against that. But he admitted a great deal of uncertainty about what Mr. Snowden possessed.
“Everything that he touched, we assume that he took,” said General Flynn, including details of how the military tracks terrorists, of enemies’ vulnerabilities and of American defenses against improvised explosive devices. He added, “We assume the worst case.”
Web search engines and some other sites use Web crawling or spidering software to update their own web content or their indexes of other sites’ web content. Web crawlers can copy all the pages they visit for later processing by a search engine that indexes the downloaded pages so that users can search them much more quickly.
A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
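The seed-and-frontier mechanism just described fits in a few lines of Python. The sketch below is illustrative only: the seed URL is a placeholder, the frontier is a plain queue, and the politeness, re-visit, and robots.txt policies discussed later are omitted.

```python
# Minimal breadth-first crawler sketch: seeds, frontier, visited set.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=50):
    frontier = deque(seeds)       # URLs waiting to be visited
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue              # skip unreachable or non-text pages
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            frontier.append(urljoin(url, link))   # grow the crawl frontier
    return visited

# crawl(["https://example.com/"])   # placeholder seed
```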
The large volume of the Web implies that the crawler can only download a limited number of Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that pages might already have been updated or even deleted by the time the crawler reaches them.
The number of possible URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
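The arithmetic of the gallery example (4 × 3 × 2 × 2 = 48) can be checked directly. The parameter names below are invented purely for illustration.

```python
# Enumerate every URL the hypothetical gallery would expose.
from itertools import product
from urllib.parse import urlencode

options = {
    "sort": ["name", "date", "size", "rating"],   # 4 sort orders
    "thumb": ["small", "medium", "large"],        # 3 thumbnail sizes
    "fmt": ["jpg", "png"],                        # 2 file formats
    "usercontent": ["on", "off"],                 # 2 content toggles
}
urls = ["/gallery?" + urlencode(dict(zip(options, combo)))
        for combo in product(*options.values())]
print(len(urls))   # 48 distinct URLs, all serving the same content
```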
As Edwards et al. noted, “Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained.” A crawler must carefully choose at each step which pages to visit next.
The behavior of a Web crawler is the outcome of a combination of policies:
- a selection policy that states which pages to download,
- a re-visit policy that states when to check for changes to the pages,
- a politeness policy that states how to avoid overloading Web sites, and
- a parallelization policy that states how to coordinate distributed web crawlers.
Given the current size of the Web, even large search engines cover only a portion of the publicly available part. A 2005 study showed that large-scale search engines index no more than 40-70% of the indexable Web; a previous study by Steve Lawrence and Lee Giles showed that no search engine indexed more than 16% of the Web in 1999. As a crawler always downloads just a fraction of the Web pages, it is highly desirable that the downloaded fraction contains the most relevant pages and not just a random sample of the Web.
This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Web site). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling.
Cho et al. made the first study on policies for crawl scheduling. Their data set was a 180,000-page crawl from the stanford.edu domain, in which a crawling simulation was done with different strategies. The ordering metrics tested were breadth-first, backlink count and partial PageRank calculations. One of the conclusions was that if the crawler wants to download pages with high PageRank early during the crawling process, then the partial PageRank strategy is better, followed by breadth-first and backlink count. However, these results are for just a single domain. Cho also wrote his Ph.D. dissertation at Stanford on web crawling.
Najork and Wiener performed an actual crawl on 328 million pages, using breadth-first ordering. They found that a breadth-first crawl captures pages with high PageRank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that “the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates.”
Abiteboul designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation). In OPIC, each page is given an initial sum of “cash” that is distributed equally among the pages it points to. It is similar to a PageRank computation, but it is faster and is only done in one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with the higher amounts of “cash”. Experiments were carried out on a 100,000-page synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies, nor experiments on the real Web.
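A simplified sketch of the OPIC idea, operating on a toy in-memory link graph rather than real fetches: every seed starts with an equal share of cash, fetching a page credits its cash to a running history and distributes it evenly among not-yet-fetched out-links, and the crawler always fetches the frontier page holding the most cash. (Real OPIC keeps cycling cash through all pages, including already-fetched ones; this sketch stops at one pass.)

```python
def opic_crawl(graph, seeds, max_pages):
    cash = {u: 1.0 / len(seeds) for u in seeds}
    history = {}                      # total cash each page has received
    order = []
    while cash and len(order) < max_pages:
        url = max(cash, key=cash.get)     # richest frontier page first
        amount = cash.pop(url)
        history[url] = history.get(url, 0.0) + amount
        order.append(url)
        outlinks = [v for v in graph.get(url, []) if v not in history]
        for v in outlinks:                # distribute cash to out-links
            cash[v] = cash.get(v, 0.0) + amount / len(outlinks)
        # cash reaching a page with no unvisited out-links is simply
        # dropped here; real OPIC routes it to a virtual page instead
    return order, history

toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a", "d"], "d": []}
print(opic_crawl(toy_graph, ["a"], 4)[0])   # e.g. ['a', 'b', 'c', 'd']
```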
Boldi et al. used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against depth-first, random ordering and an omniscient strategy. The comparison was based on how well PageRank computed on a partial crawl approximates the true PageRank value. Surprisingly, some visits that accumulate PageRank very quickly (most notably, breadth-first and the omniscient visit) provide very poor progressive approximations.
Baeza-Yates et al. used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domains, testing several crawling strategies. They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when it is available, to guide the current one.
Daneshpajouh et al. designed a community-based algorithm for discovering good seeds. Their method crawls web pages with high PageRank from different communities in fewer iterations than a crawl starting from random seeds. Good seeds can be extracted from a previously crawled Web graph using this method, and a new crawl using these seeds can be very effective.
Restricting followed links
A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a Web resource’s MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may examine the URL and only request a resource if the URL ends with certain characters such as .html, .htm, .asp, .aspx, .php, .jsp, .jspx or a slash. This strategy may cause numerous HTML Web resources to be unintentionally skipped.
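Both tactics from the paragraph above can be sketched with the standard library: an HTTP HEAD request to check the Content-Type before committing to a GET, and the cheaper (but lossy) URL-suffix filter.

```python
# HEAD-before-GET: fetch headers only, download the body only for HTML.
from urllib.request import Request, urlopen

def is_html(url):
    req = Request(url, method="HEAD")
    try:
        with urlopen(req, timeout=10) as resp:
            ctype = resp.headers.get("Content-Type", "")
    except Exception:
        return False
    return ctype.split(";")[0].strip().lower() == "text/html"

# Cheaper shortcut: filter by URL suffix before making any request.
# As noted above, this unintentionally skips many real HTML resources.
HTML_SUFFIXES = (".html", ".htm", ".asp", ".aspx", ".php", ".jsp", ".jspx", "/")

def looks_like_html(url):
    return url.lower().endswith(HTML_SUFFIXES)
```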
Some crawlers may also avoid requesting any resources that have a “?” in them (are dynamically produced) in order to avoid spider traps that may cause the crawler to download an infinite number of URLs from a Web site. This strategy is unreliable if the site uses a rewrite engine to simplify its URLs.
Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed including conversion of URLs to lowercase, removal of “.” and “..” segments, and adding trailing slashes to the non-empty path component.
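A sketch of several of these normalization steps: lowercase the scheme and host, resolve “.” and “..” segments, drop fragments, and add a trailing slash to directory-style paths. The “no file extension means directory” rule is a heuristic assumed here, not a standard.

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    scheme, netloc, path, query, _fragment = urlsplit(url)
    path = posixpath.normpath(path) if path else "/"
    if path == ".":
        path = "/"
    if not posixpath.splitext(path)[1] and not path.endswith("/"):
        path += "/"                  # normpath strips trailing slashes
    return urlunsplit((scheme.lower(), netloc.lower(), path, query, ""))

print(normalize("HTTP://Example.COM/a/./b/../c"))   # http://example.com/a/c/
```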
Some crawlers intend to download as many resources as possible from a particular web site. So the path-ascending crawler was introduced, which ascends to every path in each URL that it intends to crawl. For example, when given a seed URL of http://llama.org/hamster/monkey/page.html, it will attempt to crawl /hamster/monkey/, /hamster/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling.
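Using the llama.org example above, a path-ascending expansion might look like this:

```python
# Generate every ancestor path of a seed URL, deepest first.
from urllib.parse import urlsplit, urlunsplit

def ascend(url):
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    urls = []
    for i in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:i]) + ("/" if i else "")
        urls.append(urlunsplit((parts.scheme, parts.netloc, path, "", "")))
    return urls

print(ascend("http://llama.org/hamster/monkey/page.html"))
# ['http://llama.org/hamster/monkey/', 'http://llama.org/hamster/',
#  'http://llama.org/']
```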
Many path-ascending crawlers are also known as harvesting software, because they are used to “harvest” or collect all the content (for example, the photos in a gallery) from a specific page or host.
The importance of a page for a crawler can also be expressed as a function of the similarity of a page to a given query. Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. The concepts of topical and focused crawling were first introduced by Menczer and by Chakrabarti et al.
The main problem in focused crawling is that, in the context of a Web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton in the first web crawler of the early days of the Web. Diligenti et al. propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of focused crawling depends mostly on the richness of links in the specific topic being searched, and focused crawling usually relies on a general Web search engine for providing starting points.
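A toy version of the anchor-text predictor: score candidate links by the token overlap between their anchor text and the driving query, and pop the best-scoring links first. A production focused crawler would use a trained classifier or a proper similarity measure; the URLs below are placeholders.

```python
# Best-first link ordering driven by anchor-text overlap with a query.
import heapq

def score(anchor_text, query):
    a = set(anchor_text.lower().split())
    q = set(query.lower().split())
    return len(a & q) / len(q) if q else 0.0

def focused_order(links, query):
    # links: iterable of (url, anchor_text) pairs
    ranked = [(-score(text, query), url) for url, text in links]
    heapq.heapify(ranked)                 # best-first frontier
    while ranked:
        neg, url = heapq.heappop(ranked)
        yield url, -neg

links = [("http://example.com/crawlers", "web crawler tutorial"),
         ("http://example.com/cats", "cute cat pictures")]
for url, s in focused_order(links, "web crawler"):
    print(s, url)      # 1.0 .../crawlers, then 0.0 .../cats
```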
An example of focused crawlers are academic crawlers, which crawl free-access academic documents, such as citeseerxbot, the crawler of the CiteSeerX search engine. Other academic search engines include Google Scholar and Microsoft Academic Search. Because most academic papers are published as PDFs, such crawlers are particularly interested in crawling PDF and PostScript files, as well as Microsoft Word documents and their zipped formats. Because of this, general open-source crawlers, such as Heritrix, must be customized to filter out other MIME types, or middleware is used to extract these documents and import them into the focused crawl database and repository. Identifying whether these documents are academic or not is challenging and can add significant overhead to the crawling process, so it is performed as a post-crawling step using machine learning or regular-expression algorithms. These academic documents are usually obtained from the home pages of faculty and students or from the publication pages of research institutes. Because academic documents make up only a small fraction of all web pages, good seed selection is important in boosting the efficiency of these web crawlers. Other academic crawlers may download plain text and HTML files containing metadata of academic papers, such as titles, authors, and abstracts. This increases the overall number of papers, but a significant fraction may not provide free PDF downloads.
The Web has a very dynamic nature, and crawling a fraction of the Web can take weeks or months. By the time a Web crawler has finished its crawl, many events could have happened, including creations, updates and deletions.
From the search engine’s point of view, there is a cost associated with not detecting an event, and thus having an outdated copy of a resource. The most-used cost functions are freshness and age.
Freshness: This is a binary measure that indicates whether the local copy is accurate or not. The freshness of a page p in the repository at time t is F_p(t) = 1 if the local copy of p is identical to the live page at time t, and F_p(t) = 0 otherwise.
Age: This is a measure that indicates how outdated the local copy is. The age of a page p in the repository at time t is A_p(t) = 0 if p has not been modified since it was last synchronized, and A_p(t) = t - (time of the last modification of p) otherwise.
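Written out as code, a small sketch, assuming each page’s last-modification and last-synchronization timestamps are known (e.g. as time.time() floats):

```python
def freshness(last_modified, last_synced):
    # 1 while the local copy still matches the live page, else 0
    return 1 if last_synced >= last_modified else 0

def age(last_modified, last_synced, now):
    # 0 until the live page changes; then grows until the next sync
    if last_synced >= last_modified:
        return 0.0
    return now - last_modified
```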
Coffman et al. worked with a definition of the objective of a Web crawler that is equivalent to freshness, but used a different wording: they propose that a crawler must minimize the fraction of time pages remain outdated. They also noted that the problem of Web crawling can be modeled as a multiple-queue, single-server polling system, in which the Web crawler is the server and the Web sites are the queues. Page modifications are the arrivals of customers, and switch-over times are the intervals between page accesses to a single Web site. Under this model, the mean waiting time for a customer in the polling system is equivalent to the average age for the Web crawler.
The objective of the crawler is to keep the average freshness of pages in its collection as high as possible, or to keep the average age of pages as low as possible. These objectives are not equivalent: in the first case, the crawler is just concerned with how many pages are out-dated, while in the second case, the crawler is concerned with how old the local copies of pages are.
Two simple re-visiting policies were studied by Cho and Garcia-Molina:
Uniform policy: This involves re-visiting all pages in the collection with the same frequency, regardless of their rates of change.
Proportional policy: This involves re-visiting more often the pages that change more frequently. The visiting frequency is directly proportional to the (estimated) change frequency.
(In both cases, the repeated crawling order of pages can be done either in a random or a fixed order.)
Cho and Garcia-Molina proved the surprising result that, in terms of average freshness, the uniform policy outperforms the proportional policy in both a simulated Web and a real Web crawl. Intuitively, the reasoning is that, as web crawlers have a limit to how many pages they can crawl in a given time frame, (1) they will allocate too many new crawls to rapidly changing pages at the expense of less frequently updated pages, and (2) the freshness of rapidly changing pages lasts for a shorter period than that of less frequently changing pages. In other words, a proportional policy allocates more resources to crawling frequently updated pages, but experiences less overall freshness time from them.
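This result can be illustrated with a toy simulation, under strong simplifying assumptions (discrete time, a budget of one refresh per step, pages changing independently with known probabilities; all values below are invented). With one rapidly changing page and four slow ones, the proportional policy spends nearly all of its budget keeping the hot page fresh while the slow pages go stale, and ends up with lower average freshness:

```python
# Toy comparison of the uniform and proportional re-visit policies.
# Each step: pages change at random, then the crawler refreshes one
# page chosen with probability proportional to the policy weights.
import random

def simulate(change_probs, policy_weights, steps=100_000, seed=1):
    rng = random.Random(seed)
    n = len(change_probs)
    stale = [False] * n
    fresh_page_steps = 0
    total = sum(policy_weights)
    for _ in range(steps):
        for i, p in enumerate(change_probs):   # pages change...
            if rng.random() < p:
                stale[i] = True
        r = rng.random() * total               # ...crawler refreshes one
        for i, w in enumerate(policy_weights):
            r -= w
            if r <= 0:
                stale[i] = False
                break
        fresh_page_steps += stale.count(False)
    return fresh_page_steps / (steps * n)      # average freshness

rates = [0.9, 0.01, 0.01, 0.01, 0.01]          # one hot page, four slow
print("uniform:     ", simulate(rates, [1] * len(rates)))
print("proportional:", simulate(rates, rates))
```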
To improve freshness, the crawler should penalize the elements that change too often. The optimal re-visiting policy is neither the uniform policy nor the proportional policy. The optimal method for keeping average freshness high includes ignoring the pages that change too often, and the optimal method for keeping average age low is to use access frequencies that monotonically (and sub-linearly) increase with the rate of change of each page. In both cases, the optimal is closer to the uniform policy than to the proportional policy: as Coffman et al. note, “in order to minimize the expected obsolescence time, the accesses to any particular page should be kept as evenly spaced as possible”. Explicit formulas for the re-visit policy are not attainable in general, but they can be obtained numerically, as they depend on the distribution of page changes. Cho and Garcia-Molina show that the exponential distribution is a good fit for describing page changes, while Ipeirotis et al. show how to use statistical tools to discover the parameters that affect this distribution. Note that the re-visiting policies considered here regard all pages as homogeneous in terms of quality (“all pages on the Web are worth the same”), which is not a realistic scenario, so further information about Web page quality should be included to achieve a better crawling policy.
Crawlers can retrieve data much quicker and in greater depth than human searchers, so they can have a crippling impact on the performance of a site. Needless to say, if a single crawler is performing multiple requests per second and/or downloading large files, a server would have a hard time keeping up with requests from multiple crawlers.
As noted by Koster, the use of Web crawlers is useful for a number of tasks, but comes with a price for the general community. The costs of using Web crawlers include:
- network resources, as crawlers require considerable bandwidth and operate with a high degree of parallelism during a long period of time;
- server overload, especially if the frequency of accesses to a given server is too high;
- poorly written crawlers, which can crash servers or routers, or which download pages they cannot handle; and
- personal crawlers that, if deployed by too many users, can disrupt networks and Web servers.
A partial solution to these problems is the robots exclusion protocol, also known as the robots.txt protocol, which is a standard for administrators to indicate which parts of their Web servers should not be accessed by crawlers. This standard does not include a suggestion for the interval of visits to the same server, even though this interval is the most effective way of avoiding server overload. More recently, commercial search engines like Google, Ask Jeeves, MSN and Yahoo! Search have become able to use an extra “Crawl-delay:” parameter in the robots.txt file to indicate the number of seconds to delay between requests.
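Python’s standard library can read both the exclusion rules and the Crawl-delay extension. A minimal sketch, with a hypothetical bot name and placeholder URLs:

```python
# Check robots.txt permissions and honor Crawl-delay before fetching.
import time
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()                                  # fetches and parses robots.txt

agent = "MyResearchBot"                    # hypothetical user agent
url = "https://example.com/some/page.html"
if rp.can_fetch(agent, url):
    delay = rp.crawl_delay(agent) or 1.0   # fall back to 1 s if unspecified
    time.sleep(delay)
    # ... fetch the page here ...
```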
The first proposed interval between successive page loads was 60 seconds. However, if pages were downloaded at this rate from a website with more than 100,000 pages over a perfect connection with zero latency and infinite bandwidth, it would take more than 2 months to download that entire Web site alone, and only a fraction of the resources of that Web server would be used. This does not seem acceptable.
Cho uses 10 seconds as an interval for accesses, and the WIRE crawler uses 15 seconds as the default. The MercatorWeb crawler follows an adaptive politeness policy: if it took t seconds to download a document from a given server, the crawler waits for 10t seconds before downloading the next page. Dill et al. use 1 second.
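The MercatorWeb rule is easy to sketch: if the last download from a server took t seconds, wait 10t before the next request to it. Per-server bookkeeping is omitted in this sketch.

```python
# Adaptive politeness: the slower the server, the longer the wait.
import time
from urllib.request import urlopen

def polite_fetch(url, multiplier=10):
    start = time.monotonic()
    body = urlopen(url, timeout=30).read()
    elapsed = time.monotonic() - start
    time.sleep(multiplier * elapsed)       # the 10t politeness interval
    return body
```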
For those using Web crawlers for research purposes, a more detailed cost-benefit analysis is needed and ethical considerations should be taken into account when deciding where to crawl and how fast to crawl.
Anecdotal evidence from access logs shows that access intervals from known crawlers vary between 20 seconds and 3–4 minutes. It is worth noting that even when being very polite, and taking all the safeguards to avoid overloading Web servers, some complaints from Web server administrators are received. Brin and Page note that: “… running a crawler which connects to more than half a million servers (…) generates a fair amount of e-mail and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen.”
A parallel crawler is a crawler that runs multiple processes in parallel. The goal is to maximize the download rate while minimizing the overhead from parallelization and to avoid repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes.
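A common assignment policy is to hash the host name, so that every URL from a given site is always handled by the same crawling process; this both prevents two workers from fetching the same page and keeps per-site politeness local to one worker. A sketch:

```python
# Deterministic URL-to-worker assignment by hashing the host name.
import hashlib
from urllib.parse import urlsplit

def assign_worker(url, num_workers):
    host = urlsplit(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_workers

print(assign_worker("http://www.example.com/a.html", 8))
print(assign_worker("http://www.example.com/b.html", 8))  # same worker
```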
A crawler must not only have a good crawling strategy, as noted in the previous sections, but it should also have a highly optimized architecture.
Shkapenyuk and Suel noted that:
While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability.
Web crawlers are a central part of search engines, and details on their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often an important lack of detail that prevents others from reproducing the work. There are also emerging concerns about “search engine spamming”, which prevent major search engines from publishing their ranking algorithms.
Web crawlers typically identify themselves to a Web server by using the User-agent field of an HTTP request. Web site administrators typically examine their Web servers’ logs and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the Web site administrator may find out more information about the crawler. Examining Web server logs is a tedious task, so some administrators use tools such as CrawlTrack or SEO Crawlytics to identify, track and verify Web crawlers. Spambots and other malicious Web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler.
It is important for Web crawlers to identify themselves so that Web site administrators can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a Web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators that are interested in knowing when they may expect their Web pages to be indexed by a particular search engine.
Crawling the deep web
A vast amount of web pages lie in the deep or invisible web. These pages are typically only accessible by submitting queries to a database, and regular crawlers are unable to find these pages if there are no links that point to them. Google’s Sitemaps protocol and mod oai are intended to allow discovery of these deep-Web resources.
Deep web crawling also multiplies the number of web links to be crawled. Some crawlers only take URLs that appear in <a href="URL"> form. In some cases, such as Googlebot, crawling is done on all text contained inside hypertext content, tags, or text.
Strategic approaches may be taken to target deep Web content. With a technique called screen scraping, specialized software may be customized to automatically and repeatedly query a given Web form with the intention of aggregating the resulting data. Such software can be used to span multiple Web forms across multiple Websites. Data extracted from the results of one Web form submission can be taken and applied as input to another Web form thus establishing continuity across the Deep Web in a way not possible with traditional web crawlers.
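A sketch of this form-querying technique, with an invented endpoint and field name; each POST simulates one submission of the target form, and the responses are collected for later extraction.

```python
# Repeatedly submit a (hypothetical) search form and collect results.
from urllib.parse import urlencode
from urllib.request import urlopen

def query_form(form_url, terms):
    pages = {}
    for term in terms:
        data = urlencode({"q": term}).encode("ascii")   # POST body
        with urlopen(form_url, data=data, timeout=30) as resp:
            pages[term] = resp.read()
    return pages

# results = query_form("https://example.com/search", ["alpha", "beta"])
```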
Web crawler bias
A recent study based on a large scale analysis of robots.txt files showed that certain web crawlers were preferred over others, with Googlebot being the most preferred web crawler.
The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features:
- Yahoo! Slurp was the name of the Yahoo! Search crawler until Yahoo! contracted with Microsoft to use bingbot instead.
- Bingbot is the name of Microsoft’s Bing webcrawler. It replaced Msnbot.
- FAST Crawler is a distributed crawler, used by Fast Search & Transfer, and a general description of its architecture is available.
- Googlebot is described in some detail, but the reference is only about an early version of its architecture, which was written in C++ and Python. The crawler was integrated with the indexing process, because text parsing was done for full-text indexing and also for URL extraction. There is a URL server that sends lists of URLs to be fetched by several crawling processes. During parsing, the URLs found were passed to a URL server that checked whether the URL had been previously seen. If not, the URL was added to the queue of the URL server.
- PolyBot is a distributed crawler written in C++ and Python, which is composed of a “crawl manager”, one or more “downloaders” and one or more “DNS resolvers”. Collected URLs are added to a queue on disk, and processed later to search for seen URLs in batch mode. The politeness policy considers both third and second level domains (e.g.: http://www.example.com and www2.example.com are third level domains) because third level domains are usually hosted by the same Web server.
- RBSE was the first published web crawler. It was based on two programs: the first program, “spider”, maintains a queue in a relational database, and the second program, “mite”, is a modified www ASCII browser that downloads the pages from the Web.
- WebCrawler was used to build the first publicly available full-text index of a subset of the Web. It was based on lib-WWW to download pages, and another program to parse and order URLs for breadth-first exploration of the Web graph. It also included a real-time crawler that followed links based on the similarity of the anchor text with the provided query.
- World Wide Web Worm was a crawler used to build a simple index of document titles and URLs. The index could be searched by using the grep Unix command.
- WebFountain is a distributed, modular crawler similar to Mercator but written in C++. It features a “controller” machine that coordinates a series of “ant” machines. After repeatedly downloading pages, a change rate is inferred for each page, and a non-linear programming method must be used to solve the equation system for maximizing freshness. The authors recommend using this crawling order in the early stages of the crawl, and then switching to a uniform crawling order in which all pages are visited with the same frequency.
- WebRACE is a crawling and caching module implemented in Java, and used as a part of a more generic system called eRACE. The system receives requests from users for downloading web pages, so the crawler acts in part as a smart proxy server. The system also handles requests for “subscriptions” to Web pages that must be monitored: when the pages change, they must be downloaded by the crawler and the subscriber must be notified. The most outstanding feature of WebRACE is that, while most crawlers start with a set of “seed” URLs, WebRACE is continuously receiving new starting URLs to crawl from.
- DataparkSearch is a crawler and search engine released under the GNU General Public License.
- GNU Wget is a command-line-operated crawler written in C and released under the GPL. It is typically used to mirror Web and FTP sites.
- GRUB is an open source distributed search crawler that Wikia Search used to crawl the web.
- Heritrix is the Internet Archive‘s archival-quality crawler, designed for archiving periodic snapshots of a large portion of the Web. It was written in Java.
- ht://Dig includes a Web crawler in its indexing engine.
- HTTrack uses a Web crawler to create a mirror of a web site for off-line viewing. It is written in C and released under the GPL.
- ICDL Crawler is a cross-platform web crawler written in C++, intended to crawl Web sites based on Website Parse Templates using only the computer’s free CPU resources.
- mnoGoSearch is a crawler, indexer and a search engine written in C and licensed under the GPL (*NIX machines only)
- Norconex HTTP Collector is a web spider, or crawler, written in Java, that aims to make life easier for Enterprise Search integrators and developers (licensed under GPL).
- Nutch is a crawler written in Java and released under an Apache License. It can be used in conjunction with the Lucene text-indexing package.
- Open Search Server is a search engine and web crawler software release under the GPL.
- PHP-Crawler is a simple PHP and MySQL based crawler released under the BSD License. Easy to install, it became popular for small MySQL-driven websites on shared hosting.
- tkWWW Robot, a crawler based on the tkWWW web browser (licensed under GPL).
- Scrapy, an open source webcrawler framework, written in python (licensed under BSD).
- Seeks, a free distributed search engine (licensed under Affero General Public License).
- YaCy, a free distributed search engine, built on principles of peer-to-peer networks (licensed under GPL).
Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox.
Web scraping is closely related to web indexing, which indexes information on the web using a bot or web crawler and is a universal technique adopted by most search engines. In contrast, web scraping focuses more on the transformation of unstructured data on the web, typically in HTML format, into structured data that can be stored and analyzed in a central local database or spreadsheet. Web scraping is also related to web automation, which simulates human browsing using computer software. Uses of web scraping include online price comparison, contact scraping, weather data monitoring, website change detection, research, web mashup and web data integration.
Web scraping is the process of automatically collecting information from the World Wide Web. It is a field with active developments sharing a common goal with the semantic web vision, an ambitious initiative that still requires breakthroughs in text processing, semantic understanding, artificial intelligence and human-computer interactions. Web scraping, instead, favors practical solutions based on existing technologies that are often entirely ad hoc. Therefore, there are different levels of automation that existing web-scraping technologies can provide:
- Human copy-and-paste: Sometimes even the best web-scraping technology cannot replace a human’s manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation.
- Text grepping and regular expression matching: A simple yet powerful approach to extract information from web pages can be based on the UNIX grep command or the regular expression-matching facilities of programming languages (for instance Perl or Python); a sketch follows after this list.
- HTTP programming: Static and dynamic web pages can be retrieved by posting HTTP requests to the remote web server using socket programming.
- HTML parsing: Many websites have large collections of pages generated dynamically from an underlying structured source like a database. Data of the same category are typically encoded into similar pages by a common script or template. In data mining, a program that detects such templates in a particular information source, extracts its content, and translates it into a relational form is called a wrapper. Wrapper generation algorithms assume that the input pages of a wrapper induction system conform to a common template and that they can be easily identified in terms of a common URL scheme. Moreover, some semi-structured data query languages, such as XQuery and HTQL, can be used to parse HTML pages and to retrieve and transform page content.
- DOM parsing: By embedding a full-fledged web browser, such as the Internet Explorer or the Mozilla browser control, programs can retrieve the dynamic content generated by client-side scripts. These browser controls also parse web pages into a DOM tree, based on which programs can retrieve parts of the pages.
- Web-scraping software: There are many software tools available that can be used to customize web-scraping solutions. This software may attempt to automatically recognize the data structure of a page or provide a recording interface that removes the necessity to manually write web-scraping code, or some scripting functions that can be used to extract and transform content, and database interfaces that can store the scraped data in local databases.
- Vertical aggregation platforms: Several companies have developed vertical-specific harvesting platforms. These platforms create and monitor a multitude of “bots” for specific verticals, with no man-in-the-loop and no work related to a specific target site. The preparation involves establishing the knowledge base for the entire vertical, after which the platform creates the bots automatically. The platform’s robustness is measured by the quality of the information it retrieves (usually the number of fields) and its scalability (how quickly it can scale up to hundreds or thousands of sites). This scalability is mostly used to target the long tail of sites that common aggregators find complicated or too labor-intensive to harvest content from.
- Semantic annotation recognition: The pages being scraped may embrace metadata or semantic markups and annotations, which can be used to locate specific data snippets. If the annotations are embedded in the pages, as Microformats do, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer, are stored and managed separately from the web pages, so the scrapers can retrieve the data schema and instructions from this layer before scraping the pages.
- Computer vision web-page analyzers. There are efforts using machine learning and computer vision that attempt to identify and extract information from web pages by interpreting pages visually as a human being might.
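A sketch combining the text-grepping and HTML-parsing approaches from the list above, run against a small inline document (regexes alone are brittle on real-world HTML, which is why the structure-aware parser is usually preferred):

```python
import re
from html.parser import HTMLParser

html_doc = """<html><head><title>Price list</title></head>
<body><p>Contact: sales@example.com</p>
<p class="price">USD 19.99</p></body></html>"""

# 1. Text grepping: pull out anything that looks like an email or price.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", html_doc)
prices = re.findall(r"USD\s+(\d+\.\d{2})", html_doc)

# 2. HTML parsing: collect the text of every <p> element.
class ParagraphText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p, self.paragraphs = False, []
    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.paragraphs.append("")
    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False
    def handle_data(self, data):
        if self.in_p:
            self.paragraphs[-1] += data

parser = ParagraphText()
parser.feed(html_doc)
print(emails, prices, parser.paragraphs)
```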
One of the first major tests of screen scraping involved American Airlines, and a firm called FareChase. AA successfully obtained an injunction from a Texas trial court, stopping FareChase from selling software that enables users to compare online fares if it also searches AA’s website. The airline argued that FareChase’s websearch software trespassed on AA’s servers when it collected the publicly available data. FareChase filed an appeal in March 2003. By June, FareChase and AA agreed to settle and the appeal was dropped.
Southwest Airlines has also challenged screen-scraping practices, and has involved both FareChase and another firm, Outtask, in a legal claim. Southwest Airlines charged that screen-scraping is illegal since it is an example of “Computer Fraud and Abuse” and has led to “Damage and Loss” and “Unauthorized Access” of Southwest’s site. It also constitutes “Interference with Business Relations”, “Trespass”, and “Harmful Access by Computer”. They also claimed that screen-scraping constitutes what is legally known as “Misappropriation and Unjust Enrichment”, as well as being a breach of the web site’s user agreement. Outtask denied all these claims, claiming that the prevailing law in this case should be US Copyright law, and that under copyright, the pieces of information being scraped would not be subject to copyright protection. Although the cases were never resolved in the Supreme Court of the United States, FareChase was eventually shuttered by parent company Yahoo!, and Outtask was purchased by travel expense company Concur.
Although these are early scraping decisions, and the theories of liability are not uniform, it is difficult to ignore a pattern emerging that the courts are prepared to protect proprietary content on commercial sites from uses which are undesirable to the owners of such sites. However, the degree of protection for such content is not settled, and will depend on the type of access made by the scraper, the amount of information accessed and copied, the degree to which the access adversely affects the site owner’s system and the types and manner of prohibitions on such conduct.
Outside of the United States, in February 2006, the Danish Maritime and Commercial Court (Copenhagen) ruled that systematic crawling, indexing, and deep linking by portal site ofir.dk of real estate site Home.dk does not conflict with Danish law or the database directive of the European Union.
In 2009 Facebook won one of the first copyright suits against a known web scraper. This laid the groundwork for numerous lawsuits that tie any web scraping to a direct copyright violation and very clear monetary damages. The most recent case is AP v. Meltwater, in which the courts stripped what is referred to as fair use on the internet.
In a February 2010 case complicated by matters of jurisdiction, Ireland’s An Ard-Chúirt delivered a verdict that illustrates the inchoate state of developing case law. In the case of Ryanair Ltd v Billigfluege.de GmbH, Ireland’s High Court ruled Ryanair’s “click-wrap” agreement to be legally binding. In contrast to the findings of the United States District Court for the Eastern District of Virginia and those of the Danish Maritime and Commercial Court, Mr. Justice Michael Hanna ruled that the hyperlink to Ryanair’s terms and conditions was plainly visible, and that placing the onus on the user to agree to terms and conditions in order to gain access to online services is sufficient to comprise a contractual relationship. A separate practical issue is data quality: scraped data may be invalid or incorrect, mixed with “junk” or “spam”, and subject to limitations of the query interface, inconsistent output, and rapid changes made by site administrators without notice. The decision is under appeal in Ireland’s Supreme Court, the Cúirt Uachtarach na hÉireann.
Technical measures to stop bots
The administrator of a website can use various measures to stop or slow a bot. Some techniques include:
- Blocking an IP address. This will also block all browsing from that address.
- Disabling any web service API that the website’s system might expose.
- Bots sometimes declare who they are (using user agent strings) and can be blocked on that basis (using robots.txt); ‘googlebot‘ is an example. Some bots make no distinction between themselves and a human browser.
- Bots can be blocked by excess traffic monitoring.
- Bots can sometimes be blocked with tools that verify a real person is accessing the site, such as a CAPTCHA. Bots are sometimes coded to explicitly break specific CAPTCHA patterns.
- Commercial anti-bot services: Companies offer anti-bot and anti-scraping services for websites. A few web application firewalls have limited bot detection capabilities as well.
- Locating bots with a honeypot or other method to identify the IP addresses of automated crawlers.
- Using CSS sprites to display such data as phone numbers or email addresses, at the cost of accessibility to screen reader users.