Snowden Used Automated Web Crawler To Scrape Data From Over 1.7 Million Restricted National Security Agency Files — Videos

Posted on February 10, 2014. Filed under: Blogroll, Books, Business, College, Communications, Computers, Constitution, Crime, Culture, Economics, Education, Employment, Foreign Policy, Fraud, government, government spending, Investments, Law, liberty, Life, Links, Literacy, Math, media, People, Philosophy, Photos, Politics, Radio, Rants, Raves, Regulations, Resources, Security, Strategy, Talk Radio, Technology, Unemployment, Video, War, Wealth, Wisdom


The Pronk Pops Show Podcasts

Pronk Pops Show 207: February 10, 2014

Pronk Pops Show 206: February 7, 2014

Pronk Pops Show 205: February 5, 2014

Pronk Pops Show 204: February 4, 2014

Pronk Pops Show 203: February 3, 2014

Pronk Pops Show 202: January 31, 2014

Pronk Pops Show 201: January 30, 2014

Pronk Pops Show 200: January 29, 2014

Pronk Pops Show 199: January 28, 2014

Pronk Pops Show 198: January 27, 2014

Pronk Pops Show 197: January 24, 2014

Pronk Pops Show 196: January 22, 2014

Pronk Pops Show 195: January 21, 2014

Pronk Pops Show 194: January 17, 2014

Pronk Pops Show 193: January 16, 2014

Pronk Pops Show 192: January 14, 2014

Pronk Pops Show 191: January 13, 2014

Pronk Pops Show 190: January 10, 2014

Pronk Pops Show 189: January 9, 2014

Pronk Pops Show 188: January 8, 2014

Pronk Pops Show 187: January 7, 2014

Pronk Pops Show 186: January 6, 2014

Pronk Pops Show 185: January 3, 2014

Pronk Pops Show 184: December 19, 2013

Pronk Pops Show 183: December 17, 2013

Pronk Pops Show 182: December 16, 2013

Pronk Pops Show 181: December 13, 2013

Pronk Pops Show 180: December 12, 2013

Pronk Pops Show 179: December 11, 2013

Pronk Pops Show 178: December 5, 2013

Pronk Pops Show 177: December 2, 2013

Pronk Pops Show 176: November 27, 2013

Pronk Pops Show 175: November 26, 2013

Pronk Pops Show 174: November 25, 2013

Pronk Pops Show 173: November 22, 2013

Pronk Pops Show 172: November 21, 2013

Pronk Pops Show 171: November 20, 2013

Pronk Pops Show 170: November 19, 2013

Pronk Pops Show 169: November 18, 2013

Pronk Pops Show 168: November 15, 2013

Pronk Pops Show 167: November 14, 2013

Pronk Pops Show 166: November 13, 2013

Pronk Pops Show 165: November 12, 2013

Pronk Pops Show 164: November 11, 2013

Pronk Pops Show 163: November 8, 2013

Pronk Pops Show 162: November 7, 2013

Pronk Pops Show 161: November 4, 2013

Pronk Pops Show 160: November 1, 2013

The Pronk Pops Show Podcasts Portfolio

Listen To Pronk Pops Podcast or Download Show 202-207

Listen To Pronk Pops Podcast or Download Show 194-201

Listen To Pronk Pops Podcast or Download Show 184-193

Listen To Pronk Pops Podcast or Download Show 174-183

Listen To Pronk Pops Podcast or Download Show 165-173

Listen To Pronk Pops Podcast or Download Show 158-164

Listen To Pronk Pops Podcast or Download Show 151-157

Listen To Pronk Pops Podcast or Download Show 143-150

Listen To Pronk Pops Podcast or Download Show 135-142

Listen To Pronk Pops Podcast or Download Show 131-134

Listen To Pronk Pops Podcast or Download Show 124-130

Listen To Pronk Pops Podcast or Download Shows 121-123

Listen To Pronk Pops Podcast or Download Shows 118-120

Listen To Pronk Pops Podcast or Download Shows 113-117

Listen To Pronk Pops Podcast or Download Show 112

Listen To Pronk Pops Podcast or Download Shows 108-111

Listen To Pronk Pops Podcast or Download Shows 106-108

Listen To Pronk Pops Podcast or Download Shows 104-105

Listen To Pronk Pops Podcast or Download Shows 101-103

Listen To Pronk Pops Podcast or Download Shows 98-100

Listen To Pronk Pops Podcast or Download Shows 94-97

Listen To Pronk Pops Podcast or Download Shows 93

Listen To Pronk Pops Podcast or Download Shows 92

Listen To Pronk Pops Podcast or Download Shows 91

Listen To Pronk Pops Podcast or Download Shows 88-90

Listen To Pronk Pops Podcast or Download Shows 84-87

Listen To Pronk Pops Podcast or Download Shows 79-83

Listen To Pronk Pops Podcast or Download Shows 74-78

Listen To Pronk Pops Podcast or Download Shows 71-73

Listen To Pronk Pops Podcast or Download Shows 68-70

Listen To Pronk Pops Podcast or Download Shows 65-67

Listen To Pronk Pops Podcast or Download Shows 62-64

Listen To Pronk Pops Podcast or Download Shows 58-61

Listen To Pronk Pops Podcast or Download Shows 55-57

Listen To Pronk Pops Podcast or Download Shows 52-54

Listen To Pronk Pops Podcast or Download Shows 49-51

Listen To Pronk Pops Podcast or Download Shows 45-48

Listen To Pronk Pops Podcast or Download Shows 41-44

Listen To Pronk Pops Podcast or Download Shows 38-40

Listen To Pronk Pops Podcast or Download Shows 34-37

Listen To Pronk Pops Podcast or Download Shows 30-33

Listen To Pronk Pops Podcast or Download Shows 27-29

Listen To Pronk Pops Podcast or Download Shows 17-26

Listen To Pronk Pops Podcast or Download Shows 16-22

Listen To Pronk Pops Podcast or Download Shows 10-15

Listen To Pronk Pops Podcast or Download Shows 01-09

The Pronk Pops Show 207, February 10, 2014, Story 1: Snowden Used Automated Web Crawler To Scrape Data From Over 1.7 Million Restricted National Security Agency Files — Videos

Snowden Used Common, Low-Cost Tool To Get NSA Files: Report

Edward Snowden, v 1.0: NSA Whistleblower William Binney Tells All

NSA whistleblower Edward Snowden: ‘I don’t want to live in a society that does these sort of things’

Dick Cheney ‘This Week’ Interview – Former Vice President on NSA Spying Revelations and GOP Politics

A Massive Surveillance State Glenn Greenwald Exposes Covert NSA Program Collecting Calls, Emails

Web Crawler – CS101 – Udacity

Web scraping the easy way

Python Web Scraping Tutorial 1 (Intro To Web Scraping)

Web Scraping Techniques

Web scraping: Reliably and efficiently pull data from pages that don’t expect it

2014 Best Scraper pro gold email and phone extractor harvestor review- website scraping lead

Lecture -38 Search Engine And Web Crawler – Part-I

Lecture -39 Search Engine And Web Crawlers: Part-II

Web Scraping Review 1

Web Scraping Review 2

Snowden Used Low-Cost Tool to Best N.S.A.

By DAVID E. SANGER and ERIC SCHMITT

Intelligence officials investigating how Edward J. Snowden gained access to a huge trove of the country’s most highly classified documents say they have determined that he used inexpensive and widely available software to “scrape” the National Security Agency’s networks, and kept at it even after he was briefly challenged by agency officials.

Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official. “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated.”

The findings are striking because the N.S.A.’s mission includes protecting the nation’s most sensitive military and intelligence computer systems from cyberattacks, especially the sophisticated attacks that emanate from Russia and China. Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.

Officials say Mr. Snowden used “web crawler” software. Channel 4/Agence France-Presse — Getty Images

Moreover, Mr. Snowden succeeded nearly three years after the WikiLeaks disclosures, in which military and State Department files, of far less sensitivity, were taken using similar techniques.

Mr. Snowden had broad access to the N.S.A.’s complete files because he was working as a technology contractor for the agency in Hawaii, helping to manage the agency’s computer systems in an outpost that focuses on China and North Korea. A web crawler, also called a spider, automatically moves from website to website, following links embedded in each document, and can be programmed to copy everything in its path.

Mr. Snowden appears to have set the parameters for the searches, including which subjects to look for and how deeply to follow links to documents and other data on the N.S.A.’s internal networks. Intelligence officials told a House hearing last week that he accessed roughly 1.7 million files.

Among the materials prominent in the Snowden files are the agency’s shared “wikis,” databases to which intelligence analysts, operatives and others contributed their knowledge. Some of that material indicates that Mr. Snowden “accessed” the documents. But experts say they may well have been downloaded not by him but by the program acting on his behalf.

Agency officials insist that if Mr. Snowden had been working from N.S.A. headquarters at Fort Meade, Md., which was equipped with monitors designed to detect when a huge volume of data was being accessed and downloaded, he almost certainly would have been caught. But because he worked at an agency outpost that had not yet been upgraded with modern security measures, his copying of what the agency’s newly appointed No. 2 officer, Rick Ledgett, recently called “the keys to the kingdom” raised few alarms.

“Some place had to be last” in getting the security upgrade, said one official familiar with Mr. Snowden’s activities. But he added that Mr. Snowden’s actions had been “challenged a few times.”

In at least one instance when he was questioned, Mr. Snowden provided what were later described to investigators as legitimate-sounding explanations for his activities: As a systems administrator he was responsible for conducting routine network maintenance. That could include backing up the computer systems and moving information to local servers, investigators were told.

But from his first days working as a contractor inside the N.S.A.’s aging underground Oahu facility for Dell, the computer maker, and then at a modern office building on the island for Booz Allen Hamilton, the technology consulting firm that sells and operates computer security services used by the government, Mr. Snowden learned something critical about the N.S.A.’s culture: While the organization built enormously high electronic barriers to keep out foreign invaders, it had rudimentary protections against insiders.

“Once you are inside the assumption is that you are supposed to be there, like in most organizations,” said Richard Bejtlich, the chief security strategist for FireEye, a Silicon Valley computer security firm, and a senior fellow at the Brookings Institution. “But that doesn’t explain why they weren’t more vigilant about excessive activity in the system.”

Investigators have yet to answer the question of whether Mr. Snowden happened into an ill-defended outpost of the N.S.A. or sought a job there because he knew it had yet to install the security upgrades that might have stopped him.

“He was either very lucky or very strategic,” one intelligence official said. A new book, “The Snowden Files,” by Luke Harding, a correspondent for The Guardian in London, reports that Mr. Snowden sought his job at Booz Allen because “to get access to a final tranche of documents” he needed “greater security privileges than he enjoyed in his position at Dell.”

Through his lawyer at the American Civil Liberties Union, Mr. Snowden did not specifically address the government’s theory of how he obtained the files, saying in a statement: “It’s ironic that officials are giving classified information to journalists in an effort to discredit me for giving classified information to journalists. The difference is that I did so to inform the public about the government’s actions, and they’re doing so to misinform the public about mine.”

The headquarters of Booz Allen Hamilton, one of Edward J. Snowden’s former employers, in McLean, Va. He had broad access to National Security Agency files as a contractor in Hawaii. Michael Reynolds/European Pressphoto Agency

The N.S.A. declined to comment on its investigation or the security changes it has made since the Snowden disclosures. Other intelligence officials familiar with the findings of the investigations underway — there are at least four — were granted anonymity to discuss the investigations.

In interviews, officials declined to say which web crawler Mr. Snowden had used, or whether he had written some of the software himself. Officials said it functioned like Googlebot, a widely used web crawler that Google developed to find and index new pages on the web. What officials cannot explain is why the presence of such software in a highly classified system was not an obvious tip-off to unauthorized activity.

When inserted with Mr. Snowden’s passwords, the web crawler became especially powerful. Investigators determined he probably had also made use of the passwords of some colleagues or supervisors.

But he was also aided by a culture within the N.S.A., officials say, that “compartmented” relatively little information. As a result, a 29-year-old computer engineer, working from a World War II-era tunnel in Oahu and then from downtown Honolulu, had access to unencrypted files that dealt with information as varied as the bulk collection of domestic phone numbers and the intercepted communications of Chancellor Angela Merkel of Germany and dozens of other leaders.

http://www.nytimes.com/2014/02/09/us/snowden-used-low-cost-tool-to-best-nsa.html?_r=0

Officials say web crawlers are almost never used on the N.S.A.’s internal systems, making it all the more inexplicable that the one used by Mr. Snowden did not set off alarms as it copied intelligence and military documents stored in the N.S.A.’s systems and linked through the agency’s internal equivalent of Wikipedia.

The answer, officials and outside experts say, is that no one was looking inside the system in Hawaii for hard-to-explain activity. “The N.S.A. had the solution to this problem in hand, but they simply didn’t push it out fast enough,” said James Lewis, a computer expert at the Center for Strategic and International Studies who has talked extensively with intelligence officials about how the Snowden experience could have been avoided.

Nonetheless, the government had warning that it was vulnerable to such attacks. Similar techniques were used by Chelsea Manning, then known as Pfc. Bradley Manning, who was convicted of turning documents and videos over to WikiLeaks in 2010.

Evidence presented during Private Manning’s court-martial for his role as the source for large archives of military and diplomatic files given to WikiLeaks revealed that he had used a program called “wget” to download the batches of files. That program automates the retrieval of large numbers of files, but it is considered less powerful than the tool Mr. Snowden used.

The program’s use prompted changes in how secret information is handled at the State Department, the Pentagon and the intelligence agencies, but recent assessments suggest that those changes may not have gone far enough. For example, arguments have broken out about whether the N.S.A.’s data should all be encrypted “at rest” — when it is stored in servers — to make it harder to search and steal. But that would also make it harder to retrieve for legitimate purposes.

Investigators have found no evidence that Mr. Snowden’s searches were directed by a foreign power, despite suggestions to that effect by the chairman of the House Intelligence Committee, Representative Mike Rogers, Republican of Michigan, in recent television appearances and at a hearing last week.

But that leaves open the question of how Mr. Snowden chose the search terms to obtain his trove of documents, and why, according to James R. Clapper Jr., the director of national intelligence, they yielded a disproportionately large number of documents detailing American military movements, preparations and abilities around the world.

In his statement, Mr. Snowden denied any deliberate effort to gain access to any military information. “They rely on a baseless premise, which is that I was after military information,” Mr. Snowden said.

The head of the Defense Intelligence Agency, Lt. Gen. Michael T. Flynn, told lawmakers last week that Mr. Snowden’s disclosures could tip off adversaries to American military tactics and operations, and force the Pentagon to spend vast sums to safeguard against that. But he admitted a great deal of uncertainty about what Mr. Snowden possessed.

“Everything that he touched, we assume that he took,” said General Flynn, including details of how the military tracks terrorists, of enemies’ vulnerabilities and of American defenses against improvised explosive devices. He added, “We assume the worst case.”


Web Crawler

A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing.

A Web crawler may also be called a Web spider,[1] an ant, an automatic indexer,[2] or (in the FOAF software context) a Web scutter.[3]

Web search engines and some other sites use Web crawling or spidering software to update their own web content or their indexes of other sites’ web content. Web crawlers can copy all the pages they visit for later processing by a search engine that indexes the downloaded pages so that users can search them much more quickly.

Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping (see also data-driven programming).

Overview

A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
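The seed/frontier loop described above can be sketched in a few lines. This is a minimal illustration, not a production crawler: fetching and link extraction are stubbed out with an in-memory `WEB` dict (a hypothetical site graph), so only the frontier bookkeeping is shown.

```python
from collections import deque

# Hypothetical in-memory "web": each page maps to the links it contains.
WEB = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}

def crawl(seeds):
    frontier = deque(seeds)  # URLs waiting to be visited (the crawl frontier)
    visited = []             # pages in the order they were crawled
    seen = set(seeds)        # avoid queueing the same URL twice
    while frontier:
        url = frontier.popleft()       # breadth-first selection policy
        visited.append(url)
        for link in WEB.get(url, []):  # "identifies all the hyperlinks"
            if link not in seen:
                seen.add(link)
                frontier.append(link)  # "adds them to the crawl frontier"
    return visited

print(crawl(["/"]))  # ['/', '/a', '/b', '/c']
```

Real crawlers replace the `WEB` lookup with an HTTP fetch plus HTML parsing, and layer the re-visit, politeness, and parallelization policies on top of this loop.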

The large volume of the Web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The Web’s high rate of change implies that by the time the crawler finishes a pass, many of the pages it copied earlier may already have been updated or even deleted.

The number of possible URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
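The gallery arithmetic can be reproduced directly. This is an illustrative sketch: the option names and the `/gallery` URL template below are hypothetical stand-ins for the four parameters described above.

```python
from itertools import product

# Hypothetical gallery options matching the example: four sort orders,
# three thumbnail sizes, two file formats, and a user-content toggle.
sorts = ["name", "date", "size", "rating"]
thumbs = ["small", "medium", "large"]
formats = ["jpg", "png"]
user_content = ["on", "off"]

# Every combination of parameters yields a distinct URL for the same content.
urls = [
    f"/gallery?sort={s}&thumb={t}&fmt={f}&uc={u}"
    for s, t, f, u in product(sorts, thumbs, formats, user_content)
]
print(len(urls))  # 4 * 3 * 2 * 2 = 48 distinct URLs
```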

As Edwards et al. noted, “Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained.”[4] A crawler must carefully choose at each step which pages to visit next.

Crawling policy

The behavior of a Web crawler is the outcome of a combination of policies:[5]

  • a selection policy that states which pages to download,
  • a re-visit policy that states when to check for changes to the pages,
  • a politeness policy that states how to avoid overloading Web sites, and
  • a parallelization policy that states how to coordinate distributed web crawlers.

Selection policy

Given the current size of the Web, even large search engines cover only a portion of the publicly available part. A 2005 study showed that large-scale search engines index no more than 40-70% of the indexable Web;[6] a previous study by Steve Lawrence and Lee Giles showed that no search engine indexed more than 16% of the Web in 1999.[7] As a crawler always downloads just a fraction of the Web pages, it is highly desirable that the downloaded fraction contains the most relevant pages and not just a random sample of the Web.

This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Web site). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling.

Cho et al. made the first study on policies for crawl scheduling. Their data set was a 180,000-page crawl from the stanford.edu domain, in which a crawling simulation was done with different strategies.[8] The ordering metrics tested were breadth-first, backlink count and partial PageRank calculations. One of the conclusions was that if the crawler wants to download pages with high PageRank early in the crawling process, then the partial PageRank strategy is better, followed by breadth-first and backlink count. However, these results are for just a single domain. Cho also wrote his Ph.D. dissertation at Stanford on web crawling.[9]

Najork and Wiener performed an actual crawl on 328 million pages, using breadth-first ordering.[10] They found that a breadth-first crawl captures pages with high Pagerank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that “the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates.”

Abiteboul designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation).[11] In OPIC, each page is given an initial sum of “cash” that is distributed equally among the pages it points to. It is similar to a PageRank computation, but it is faster and is only done in one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with higher amounts of “cash”. Experiments were carried out on a 100,000-page synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies, nor experiments on the real Web.

Boldi et al. used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against depth-first, random ordering and an omniscient strategy. The comparison was based on how well PageRank computed on a partial crawl approximates the true PageRank value. Surprisingly, some visits that accumulate PageRank very quickly (most notably, breadth-first and the omniscient visit) provide very poor progressive approximations.[12][13]

Baeza-Yates et al. used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domains, testing several crawling strategies.[14] They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when it is available, to guide the current one.

Daneshpajouh et al. designed a community-based algorithm for discovering good seeds.[15] Their method crawls web pages with high PageRank from different communities in fewer iterations than a crawl starting from random seeds. Good seeds can be extracted from a previously crawled Web graph using this method, and a new crawl using these seeds can be very effective.

Restricting followed links

A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a Web resource’s MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may examine the URL and only request a resource if the URL ends with certain characters such as .html, .htm, .asp, .aspx, .php, .jsp, .jspx or a slash. This strategy may cause numerous HTML Web resources to be unintentionally skipped.

Some crawlers may also avoid requesting any resources that have a “?” in them (are dynamically produced) in order to avoid spider traps that may cause the crawler to download an infinite number of URLs from a Web site. This strategy is unreliable if the site uses a rewrite engine to simplify its URLs.
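The two link-filtering heuristics above, guessing the content type from the URL's suffix and skipping dynamically generated URLs, can be sketched together. The suffix list comes from the text; the function name is an assumption for illustration.

```python
# Suffixes from the text that suggest an HTML resource.
HTML_SUFFIXES = (".html", ".htm", ".asp", ".aspx", ".php", ".jsp", ".jspx", "/")

def should_fetch(url: str) -> bool:
    if "?" in url:
        return False  # likely dynamically produced: possible spider trap
    # Crude content-type guess from the URL alone, avoiding a HEAD request.
    return url.endswith(HTML_SUFFIXES)

print(should_fetch("http://example.com/docs/index.html"))  # True
print(should_fetch("http://example.com/img/photo.jpg"))    # False
print(should_fetch("http://example.com/gallery?page=2"))   # False
```

As the text notes, both heuristics trade recall for safety: static HTML pages without a recognized suffix, and legitimate dynamic pages, are skipped.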

URL normalization

Main article: URL normalization

Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed including conversion of URLs to lowercase, removal of “.” and “..” segments, and adding trailing slashes to the non-empty path component.[16]
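Two of the normalizations listed above, lowercasing the scheme and host and removing “.” and “..” segments, can be done with the standard library alone. This is a minimal sketch, not a complete canonicalizer (it does not handle default ports, percent-encoding, or trailing-slash insertion for directory URLs).

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit

def normalize(url: str) -> str:
    parts = urlsplit(url)
    scheme = parts.scheme.lower()          # scheme is case-insensitive
    host = parts.netloc.lower()            # so is the host name
    path = posixpath.normpath(parts.path) if parts.path else "/"
    # posixpath.normpath drops a trailing slash; restore it for directories
    if parts.path.endswith("/") and not path.endswith("/"):
        path += "/"
    return urlunsplit((scheme, host, path, parts.query, parts.fragment))

print(normalize("HTTP://Example.COM/a/b/../c/"))   # http://example.com/a/c/
print(normalize("http://example.com/x/./y.html"))  # http://example.com/x/y.html
```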

Path-ascending crawling

Some crawlers intend to download as many resources as possible from a particular web site. So the path-ascending crawler was introduced, which ascends to every path in each URL that it intends to crawl.[17] For example, when given a seed URL of http://llama.org/hamster/monkey/page.html, it will attempt to crawl /hamster/monkey/, /hamster/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling.

Many path-ascending crawlers are also known as harvesting software, because they are used to “harvest” or collect all the content (perhaps the collection of photos in a gallery) from a specific page or host.
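Path ascension for the seed URL in the example above amounts to enumerating every ancestor directory of the seed's path. A minimal sketch (the function name is an assumption):

```python
from urllib.parse import urlsplit

def ancestor_paths(url: str):
    """Return every ancestor directory of the URL's path, deepest first."""
    parts = urlsplit(url)
    # "/hamster/monkey/page.html" -> ["hamster", "monkey"] (drop the leaf)
    segments = parts.path.split("/")[1:-1]
    base = parts.scheme + "://" + parts.netloc
    paths = []
    for i in range(len(segments), 0, -1):
        paths.append(base + "/" + "/".join(segments[:i]) + "/")
    paths.append(base + "/")  # finally, the site root
    return paths

print(ancestor_paths("http://llama.org/hamster/monkey/page.html"))
# ['http://llama.org/hamster/monkey/', 'http://llama.org/hamster/', 'http://llama.org/']
```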

Focused crawling

Main article: Focused crawler

The importance of a page for a crawler can also be expressed as a function of the similarity of a page to a given query. Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. The concepts of topical and focused crawling were first introduced by Menczer[18][19] and by Chakrabarti et al.[20]

The main problem in focused crawling is that in the context of a Web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton[21] in the first web crawler of the early days of the Web. Diligenti et al.[22] propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of focused crawling depends mostly on the richness of links within the specific topic being searched, and focused crawling usually relies on a general Web search engine to provide starting points.

Academic-focused crawler

An example of focused crawlers is academic crawlers, which crawl freely accessible academic documents; one such crawler is citeseerxbot, the crawler of the CiteSeerX search engine. Other academic search engines include Google Scholar and Microsoft Academic Search. Because most academic papers are published in PDF format, such crawlers are particularly interested in crawling PDF, PostScript and Microsoft Word files, including their zipped formats. Because of this, general open-source crawlers, such as Heritrix, must be customized to filter out other MIME types, or middleware is used to extract these documents and import them into the focused crawl database and repository.[23] Identifying whether these documents are academic or not is challenging and can add significant overhead to the crawling process, so this is performed afterward as a post-crawling step using machine learning or regular-expression algorithms. These academic documents are usually obtained from the home pages of faculty and students or from the publication pages of research institutes. Because academic documents make up only a small fraction of all web pages, good seed selection is important in boosting the efficiency of these web crawlers.[24] Other academic crawlers may download plain-text and HTML files that contain metadata about academic papers, such as titles and abstracts. This increases the overall number of papers covered, but a significant fraction may not provide free PDF downloads.

Re-visit policy

The Web has a very dynamic nature, and crawling a fraction of the Web can take weeks or months. By the time a Web crawler has finished its crawl, many events could have happened, including creations, updates and deletions.

From the search engine’s point of view, there is a cost associated with not detecting an event, and thus having an outdated copy of a resource. The most-used cost functions are freshness and age.[25]

Freshness: This is a binary measure that indicates whether the local copy is accurate or not. The freshness of a page p in the repository at time t is defined as:

F_{p}(t)={\begin{cases}1&{{\rm {if}}}~p~{{\rm {~is~equal~to~the~local~copy~at~time}}}~t\\0&{{\rm {otherwise}}}\end{cases}}

Age: This is a measure that indicates how outdated the local copy is. The age of a page p in the repository, at time t is defined as:

A_{p}(t)={\begin{cases}0&{{\rm {if}}}~p~{{\rm {~is~not~modified~at~time}}}~t\\t-{{\rm {modification~time~of}}}~p&{{\rm {otherwise}}}\end{cases}}
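The two cost functions translate directly into code. A sketch under simplifying assumptions: each page is reduced to its last-modification time and the time the crawler last copied it (hypothetical fields chosen for illustration), so "equal to the local copy" becomes "copied at or after the last modification".

```python
def freshness(last_modified: float, last_crawled: float) -> int:
    # 1 if the local copy still matches the live page, 0 otherwise (binary)
    return 1 if last_crawled >= last_modified else 0

def age(last_modified: float, last_crawled: float, now: float) -> float:
    # 0 while the local copy is current; afterwards, time since the change
    if last_crawled >= last_modified:
        return 0.0
    return now - last_modified

print(freshness(last_modified=10.0, last_crawled=12.0))           # 1
print(age(last_modified=10.0, last_crawled=8.0, now=15.0))        # 5.0
```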

Coffman et al. worked with a definition of the objective of a Web crawler that is equivalent to freshness, but use a different wording: they propose that a crawler must minimize the fraction of time pages remain outdated. They also noted that the problem of Web crawling can be modeled as a multiple-queue, single-server polling system, on which the Web crawler is the server and the Web sites are the queues. Page modifications are the arrival of the customers, and switch-over times are the interval between page accesses to a single Web site. Under this model, mean waiting time for a customer in the polling system is equivalent to the average age for the Web crawler.[26]

The objective of the crawler is to keep the average freshness of pages in its collection as high as possible, or to keep the average age of pages as low as possible. These objectives are not equivalent: in the first case, the crawler is just concerned with how many pages are out-dated, while in the second case, the crawler is concerned with how old the local copies of pages are.

Two simple re-visiting policies were studied by Cho and Garcia-Molina:[27]

Uniform policy: This involves re-visiting all pages in the collection with the same frequency, regardless of their rates of change.

Proportional policy: This involves re-visiting more often the pages that change more frequently. The visiting frequency is directly proportional to the (estimated) change frequency.

(In both cases, the repeated crawling order of pages can be done either in a random or a fixed order.)

Cho and Garcia-Molina proved the surprising result that, in terms of average freshness, the uniform policy outperforms the proportional policy in both a simulated Web and a real Web crawl. Intuitively, the reasoning is that, as web crawlers have a limit to how many pages they can crawl in a given time frame, (1) they will allocate too many new crawls to rapidly changing pages at the expense of less frequently updated pages, and (2) the freshness of rapidly changing pages lasts for a shorter period than that of less frequently changing pages. In other words, a proportional policy allocates more resources to crawling frequently updated pages, but experiences less overall freshness time from them.
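The intuition above can be demonstrated with a toy simulation. This is an illustrative sketch, not Cho and Garcia-Molina's experiment: the change rates, the one-revisit-per-step budget, and the random seed are all made up, and the proportional policy is approximated by weighted random selection.

```python
import random

def simulate(policy, steps=20000, rates=(0.5, 0.05, 0.05, 0.05), seed=1):
    """Average freshness of a small page collection under a revisit policy."""
    rng = random.Random(seed)
    n = len(rates)
    fresh = [True] * n
    total_fresh = 0
    for t in range(steps):
        for i, r in enumerate(rates):   # each page changes independently
            if rng.random() < r:
                fresh[i] = False
        if policy == "uniform":
            i = t % n                   # round-robin: equal frequency for all
        else:
            # proportional: revisit frequency proportional to change rate
            i = rng.choices(range(n), weights=rates)[0]
        fresh[i] = True                 # revisiting restores freshness
        total_fresh += sum(fresh)
    return total_fresh / (steps * n)    # average freshness, in [0, 1]

print("uniform:     ", round(simulate("uniform"), 3))
print("proportional:", round(simulate("proportional"), 3))
```

With one page changing ten times as often as the rest, the uniform policy scores higher: the proportional policy spends most of its budget on the hot page, whose freshness decays almost immediately, while the cold pages go stale waiting.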

To improve freshness, the crawler should penalize the elements that change too often.[28] The optimal re-visiting policy is neither the uniform policy nor the proportional policy. The optimal method for keeping average freshness high includes ignoring the pages that change too often, and the optimal method for keeping average age low is to use access frequencies that monotonically (and sub-linearly) increase with the rate of change of each page. In both cases, the optimal is closer to the uniform policy than to the proportional policy: as Coffman et al. note, “in order to minimize the expected obsolescence time, the accesses to any particular page should be kept as evenly spaced as possible”.[26] Explicit formulas for the re-visit policy are not attainable in general, but they can be obtained numerically, as they depend on the distribution of page changes. Cho and Garcia-Molina show that the exponential distribution is a good fit for describing page changes,[28] while Ipeirotis et al. show how to use statistical tools to discover the parameters that affect this distribution.[29] Note that the re-visiting policies considered here regard all pages as homogeneous in terms of quality (“all pages on the Web are worth the same”), which is not a realistic scenario, so further information about Web page quality should be included to achieve a better crawling policy.

Politeness policy

Crawlers can retrieve data far more quickly and in greater depth than human searchers, so they can have a crippling impact on the performance of a site. A single crawler performing multiple requests per second or downloading large files puts a server under strain, and a server can have a hard time keeping up when requests arrive from multiple crawlers at once.

As noted by Koster, the use of Web crawlers is useful for a number of tasks, but comes with a price for the general community.[30] The costs of using Web crawlers include:

  • network resources, as crawlers require considerable bandwidth and operate with a high degree of parallelism during a long period of time;
  • server overload, especially if the frequency of accesses to a given server is too high;
  • poorly written crawlers, which can crash servers or routers, or which download pages they cannot handle; and
  • personal crawlers that, if deployed by too many users, can disrupt networks and Web servers.

A partial solution to these problems is the robots exclusion protocol, also known as the robots.txt protocol, a standard by which administrators indicate which parts of their Web servers should not be accessed by crawlers.[31] This standard does not include a suggestion for the interval of visits to the same server, even though this interval is the most effective way of avoiding server overload. Commercial search engines such as Google, Ask Jeeves, MSN and Yahoo! Search are able to use an extra “Crawl-delay:” parameter in the robots.txt file to indicate the number of seconds to delay between requests.
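For example, Python's standard urllib.robotparser module understands both the standard Disallow rules and the nonstandard Crawl-delay extension. The robots.txt body below is a made-up example:

```python
from urllib import robotparser

# Parse a robots.txt body (inlined here for illustration) and check both
# the Disallow rules and the nonstandard "Crawl-delay" extension.
robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

allowed = rp.can_fetch("MyCrawler", "http://example.com/public/page.html")
blocked = rp.can_fetch("MyCrawler", "http://example.com/private/page.html")
delay = rp.crawl_delay("MyCrawler")  # seconds to wait between requests
```

In a real crawler, `rp.set_url(...)` and `rp.read()` would fetch the live file instead of parsing an inlined string.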

The first proposed interval between successive pageloads was 60 seconds.[32] However, if pages were downloaded at this rate from a website with more than 100,000 pages over a perfect connection with zero latency and infinite bandwidth, it would take more than 2 months to download that entire Web site alone, and only a fraction of the resources of that Web server would be used. This does not seem acceptable.

Cho uses 10 seconds as an interval for accesses,[27] and the WIRE crawler uses 15 seconds as the default.[33] The MercatorWeb crawler follows an adaptive politeness policy: if it took t seconds to download a document from a given server, the crawler waits for 10t seconds before downloading the next page.[34] Dill et al. use 1 second.[35]
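The adaptive policy can be sketched as follows. This is a minimal illustration rather than Mercator's actual code, and the `fetch` callable is a stand-in for a timed HTTP download:

```python
import time

POLITENESS_FACTOR = 10  # the reported multiplier: wait 10t after a t-second download

def politeness_delay(download_seconds, factor=POLITENESS_FACTOR):
    """Seconds to pause before the next request to the same server."""
    return factor * download_seconds

def polite_crawl(urls, fetch):
    """Fetch each URL with fetch(url) -> (body, elapsed_seconds),
    sleeping adaptively between successive requests."""
    pages = []
    for url in urls:
        body, elapsed = fetch(url)
        pages.append(body)
        time.sleep(politeness_delay(elapsed))
    return pages
```

The effect is that slow responses, which suggest a loaded server, automatically earn that server a longer rest before the next request.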

For those using Web crawlers for research purposes, a more detailed cost-benefit analysis is needed and ethical considerations should be taken into account when deciding where to crawl and how fast to crawl.[36]

Anecdotal evidence from access logs shows that access intervals from known crawlers vary between 20 seconds and 3–4 minutes. It is worth noting that even when a crawler is very polite, and takes all the safeguards to avoid overloading Web servers, some complaints from Web server administrators are still received. Brin and Page note that: “… running a crawler which connects to more than half a million servers (…) generates a fair amount of e-mail and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen.”[37]

Parallelisation policy

A parallel crawler is a crawler that runs multiple processes in parallel. The goal is to maximize the download rate while minimizing the overhead from parallelization and to avoid repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes.
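One common assignment policy (an illustrative choice, not one prescribed by the text above) is to partition URLs by hashing the host name, so every process can compute the owner of any newly discovered URL without coordination:

```python
import hashlib
from urllib.parse import urlparse

NUM_PROCESSES = 4  # illustrative crawl-process count

def assigned_process(url, num_processes=NUM_PROCESSES):
    """Deterministically map a URL's host to one crawling process, so the
    same page is never fetched by two different processes."""
    host = urlparse(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_processes
```

A URL discovered by one process is forwarded to its assigned process rather than crawled locally; hashing by host also keeps each server's politeness bookkeeping in a single process.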

Architectures

High-level architecture of a standard Web crawler

A crawler must not only have a good crawling strategy, as noted in the previous sections, but it should also have a highly optimized architecture.

Shkapenyuk and Suel noted that:[38]

While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability.

Web crawlers are a central part of search engines, and details on their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often a significant lack of detail that prevents others from reproducing the work. There are also emerging concerns about “search engine spamming”, which prevent major search engines from publishing their ranking algorithms.

Crawler identification

Web crawlers typically identify themselves to a Web server by using the User-agent field of an HTTP request. Web site administrators typically examine their Web servers’ logs and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the Web site administrator can find more information about the crawler. Because examining Web server logs is a tedious task, some administrators use tools such as CrawlTrack[39] or SEO Crawlytics[40] to identify, track and verify Web crawlers. Spambots and other malicious Web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler.
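A well-behaved crawler therefore sets a descriptive User-agent on every request. The bot name and info URL below are placeholders, not a real crawler:

```python
import urllib.request

# Identify the crawler and point administrators at a page about it
# (hypothetical name and URL, for illustration only).
USER_AGENT = "ExampleBot/1.0 (+http://example.com/bot-info)"

def build_request(url):
    """Build an HTTP request that self-identifies via the User-agent field."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
```

An administrator reading the access log can then follow the embedded URL to learn who operates the crawler and how to contact them.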

It is important for Web crawlers to identify themselves so that Web site administrators can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a Web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators that are interested in knowing when they may expect their Web pages to be indexed by a particular search engine.

Crawling the deep web

A vast number of web pages lie in the deep or invisible web.[41] These pages are typically only accessible by submitting queries to a database, and regular crawlers are unable to find them if there are no links that point to them. Google’s Sitemaps protocol and mod oai[42] are intended to allow discovery of these deep-Web resources.

Deep web crawling also multiplies the number of web links to be crawled. Some crawlers take only the URLs that appear in <a href="URL"> form. In other cases, such as Googlebot, crawling is performed on all text contained inside the hypertext content, tags, or text.

Strategic approaches may be taken to target deep Web content. With a technique called screen scraping, specialized software may be customized to automatically and repeatedly query a given Web form with the intention of aggregating the resulting data. Such software can be used to span multiple Web forms across multiple Websites. Data extracted from the results of one Web form submission can be taken and applied as input to another Web form thus establishing continuity across the Deep Web in a way not possible with traditional web crawlers.
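The repeated form submission described above might be sketched like this. The endpoint URL and the "q" field name are hypothetical, since a real deep-web scraper would discover them from the target form's HTML:

```python
import urllib.parse
import urllib.request

def build_form_request(term, endpoint="http://example.com/search"):
    """POST one query term to a (hypothetical) search form."""
    data = urllib.parse.urlencode({"q": term}).encode("ascii")
    return urllib.request.Request(endpoint, data=data)

def harvest(terms, opener=urllib.request.urlopen):
    """Submit one query per term and collect the result pages. Data
    extracted from one form's results can in turn seed the term list
    for another form, chaining across the Deep Web."""
    pages = []
    for term in terms:
        with opener(build_form_request(term)) as resp:
            pages.append(resp.read())
    return pages
```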

Pages built on AJAX are among those that cause problems for web crawlers. Google has proposed a format of AJAX calls that its bot can recognize and index.[43]

Web crawler bias

A recent study based on a large scale analysis of robots.txt files showed that certain web crawlers were preferred over others, with Googlebot being the most preferred web crawler.[citation needed]

Examples


The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features:

  • Yahoo! Slurp was the name of the Yahoo! Search crawler until Yahoo! contracted with Microsoft to use bingbot instead.
  • Bingbot is the name of Microsoft’s Bing webcrawler. It replaced Msnbot.
  • FAST Crawler[44] is a distributed crawler, used by Fast Search & Transfer, and a general description of its architecture is available.[citation needed]
  • Googlebot[37] is described in some detail, but the reference covers only an early version of its architecture, which was written in C++ and Python. The crawler was integrated with the indexing process, because text parsing was done for full-text indexing and also for URL extraction. A URL server sends lists of URLs to be fetched by several crawling processes. During parsing, the URLs found were passed to the URL server, which checked whether each URL had been previously seen. If not, the URL was added to the URL server’s queue.
  • PolyBot[38] is a distributed crawler written in C++ and Python, which is composed of a “crawl manager”, one or more “downloaders” and one or more “DNS resolvers”. Collected URLs are added to a queue on disk, and processed later to search for seen URLs in batch mode. The politeness policy considers both third and second level domains (e.g.: http://www.example.com and www2.example.com are third level domains) because third level domains are usually hosted by the same Web server.
  • RBSE[45] was the first published web crawler. It was based on two programs: the first program, “spider”, maintains a queue in a relational database, and the second program, “mite”, is a modified www ASCII browser that downloads pages from the Web.
  • WebCrawler[21] was used to build the first publicly available full-text index of a subset of the Web. It was based on lib-WWW to download pages, and another program to parse and order URLs for breadth-first exploration of the Web graph. It also included a real-time crawler that followed links based on the similarity of the anchor text with the provided query.
  • World Wide Web Worm[46] was a crawler used to build a simple index of document titles and URLs. The index could be searched by using the grep Unix command.
  • WebFountain[4] is a distributed, modular crawler similar to Mercator but written in C++. It features a “controller” machine that coordinates a series of “ant” machines. After repeatedly downloading pages, a change rate is inferred for each page, and a non-linear programming method must be used to solve the equation system for maximizing freshness. The authors recommend using this crawling order in the early stages of the crawl, and then switching to a uniform crawling order, in which all pages are visited with the same frequency.
  • WebRACE[47] is a crawling and caching module implemented in Java, and used as a part of a more generic system called eRACE. The system receives requests from users for downloading web pages, so the crawler acts in part as a smart proxy server. The system also handles requests for “subscriptions” to Web pages that must be monitored: when the pages change, they must be downloaded by the crawler and the subscriber must be notified. The most outstanding feature of WebRACE is that, while most crawlers start with a set of “seed” URLs, WebRACE is continuously receiving new starting URLs to crawl from.

In addition to the specific crawler architectures listed above, there are general crawler architectures published by Cho[48] and Chakrabarti.[49]

Open-source crawlers

  • DataparkSearch is a crawler and search engine released under the GNU General Public License.
  • GNU Wget is a command-line-operated crawler written in C and released under the GPL. It is typically used to mirror Web and FTP sites.
  • GRUB is an open source distributed search crawler that Wikia Search used to crawl the web.
  • Heritrix is the Internet Archive‘s archival-quality crawler, designed for archiving periodic snapshots of a large portion of the Web. It was written in Java.
  • ht://Dig includes a Web crawler in its indexing engine.
  • HTTrack uses a Web crawler to create a mirror of a web site for off-line viewing. It is written in C and released under the GPL.
  • ICDL Crawler is a cross-platform web crawler written in C++ and intended to crawl Web sites based on Website Parse Templates using computer’s free CPU resources only.
  • mnoGoSearch is a crawler, indexer and a search engine written in C and licensed under the GPL (*NIX machines only).
  • Norconex HTTP Collector is a web spider, or crawler, written in Java, that aims to make the lives of Enterprise Search integrators and developers easier (licensed under GPL).
  • Nutch is a crawler written in Java and released under an Apache License. It can be used in conjunction with the Lucene text-indexing package.
  • Open Search Server is a search engine and web crawler software released under the GPL.
  • PHP-Crawler is a simple PHP and MySQL based crawler released under the BSD License. Easy to install, it became popular for small MySQL-driven websites on shared hosting.
  • tkWWW Robot, a crawler based on the tkWWW web browser (licensed under GPL).
  • Scrapy, an open source webcrawler framework, written in Python (licensed under BSD).
  • Seeks, a free distributed search engine (licensed under Affero General Public License).
  • YaCy, a free distributed search engine, built on principles of peer-to-peer networks (licensed under GPL).

http://en.wikipedia.org/wiki/Web_crawler

Web scraping

Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox.

Web scraping is closely related to web indexing, which indexes information on the web using a bot or web crawler and is a universal technique adopted by most search engines. In contrast, web scraping focuses more on the transformation of unstructured data on the web, typically in HTML format, into structured data that can be stored and analyzed in a central local database or spreadsheet. Web scraping is also related to web automation, which simulates human browsing using computer software. Uses of web scraping include online price comparison, contact scraping, weather data monitoring, website change detection, research, web mashup and web data integration.

Techniques

Web scraping is the process of automatically collecting information from the World Wide Web. It is a field with active developments sharing a common goal with the semantic web vision, an ambitious initiative that still requires breakthroughs in text processing, semantic understanding, artificial intelligence and human-computer interactions. Web scraping, instead, favors practical solutions based on existing technologies that are often entirely ad hoc. Therefore, there are different levels of automation that existing web-scraping technologies can provide:

  • Human copy-and-paste: Sometimes even the best web-scraping technology cannot replace a human’s manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation.
  • Text grepping and regular expression matching: A simple yet powerful approach to extract information from web pages can be based on the UNIX grep command or regular expression-matching facilities of programming languages (for instance Perl or Python).
  • HTTP programming: Static and dynamic web pages can be retrieved by posting HTTP requests to the remote web server using socket programming.
  • HTML parsers: Many websites have large collections of pages generated dynamically from an underlying structured source like a database. Data of the same category are typically encoded into similar pages by a common script or template. In data mining, a program that detects such templates in a particular information source, extracts its content and translates it into a relational form, is called a wrapper. Wrapper generation algorithms assume that input pages of a wrapper induction system conform to a common template and that they can be easily identified in terms of a common URL scheme.[1] Moreover, some semi-structured data query languages, such as XQuery and HTQL, can be used to parse HTML pages and to retrieve and transform page content.
  • DOM parsing: By embedding a full-fledged web browser, such as the Internet Explorer or the Mozilla browser control, programs can retrieve the dynamic content generated by client-side scripts. These browser controls also parse web pages into a DOM tree, based on which programs can retrieve parts of the pages.
  • Web-scraping software: There are many software tools available that can be used to customize web-scraping solutions. This software may attempt to automatically recognize the data structure of a page or provide a recording interface that removes the necessity to manually write web-scraping code, or some scripting functions that can be used to extract and transform content, and database interfaces that can store the scraped data in local databases.
  • Vertical aggregation platforms: There are several companies that have developed vertical specific harvesting platforms. These platforms create and monitor a multitude of “bots” for specific verticals with no man-in-the-loop,[clarification needed] and no work related to a specific target site. The preparation involves establishing the knowledge base for the entire vertical and then the platform creates the bots automatically. The platform’s robustness is measured by the quality of the information it retrieves (usually number of fields) and its scalability (how quick it can scale up to hundreds or thousands of sites). This scalability is mostly used to target the Long Tail of sites that common aggregators find complicated or too labor-intensive to harvest content from.
  • Semantic annotation recognizing: The pages being scraped may embrace metadata or semantic markups and annotations, which can be used to locate specific data snippets. If the annotations are embedded in the pages, as Microformat does, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer,[2] are stored and managed separately from the web pages, so the scrapers can retrieve data schema and instructions from this layer before scraping the pages.
  • Computer vision web-page analyzers. There are efforts using machine learning and computer vision that attempt to identify and extract information from web pages by interpreting pages visually as a human being might.[3]
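As a concrete instance of the text-grepping approach listed above, a regular expression can pull email addresses out of raw HTML. The snippet and pattern here are illustrative; real pages usually call for a proper parser:

```python
import re

# A made-up HTML fragment to scrape.
html = ('<p>Contact <a href="mailto:info@example.com">info@example.com</a> '
        'or <a href="mailto:sales@example.com">sales</a>.</p>')

# A deliberately simple address pattern; it is not RFC-complete.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

# Deduplicate and sort the matches.
emails = sorted(set(EMAIL_RE.findall(html)))
```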

Legal issues

Web scraping may be against the terms of use of some websites. The enforceability of these terms is unclear.[4] While outright duplication of original expression will in many cases be illegal, in the United States the courts ruled in Feist Publications v. Rural Telephone Service that duplication of facts is allowable. U.S. courts have acknowledged that users of “scrapers” or “robots” may be held liable for committing trespass to chattels,[5][6] which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. The best known of these cases, eBay v. Bidder’s Edge, resulted in an injunction ordering Bidder’s Edge to stop accessing, collecting, and indexing auctions from the eBay web site. This case involved automatic placing of bids, known as auction sniping. However, in order to succeed on a claim of trespass to chattels, the plaintiff must demonstrate that the defendant intentionally and without authorization interfered with the plaintiff’s possessory interest in the computer system and that the defendant’s unauthorized use caused damage to the plaintiff. Not all cases of web spidering brought before the courts have been considered trespass to chattels.[7]

One of the first major tests of screen scraping involved American Airlines, and a firm called FareChase.[8] AA successfully obtained an injunction from a Texas trial court, stopping FareChase from selling software that enables users to compare online fares if it also searches AA’s website. The airline argued that FareChase’s websearch software trespassed on AA’s servers when it collected the publicly available data. FareChase filed an appeal in March 2003. By June, FareChase and AA agreed to settle and the appeal was dropped.[9]

Southwest Airlines has also challenged screen-scraping practices, and has involved both FareChase and another firm, Outtask, in a legal claim. Southwest Airlines charged that the screen-scraping is illegal since it is an example of “Computer Fraud and Abuse” and has led to “Damage and Loss” and “Unauthorized Access” of Southwest’s site. It also constitutes “Interference with Business Relations”, “Trespass”, and “Harmful Access by Computer”. They also claimed that screen-scraping constitutes what is legally known as “Misappropriation and Unjust Enrichment”, as well as being a breach of the web site’s user agreement. Outtask denied all these claims, claiming that the prevailing law in this case should be US copyright law, and that under copyright, the pieces of information being scraped would not be subject to copyright protection. Although the cases were never resolved in the Supreme Court of the United States, FareChase was eventually shuttered by parent company Yahoo!, and Outtask was purchased by travel expense company Concur.[10]

Although these are early scraping decisions, and the theories of liability are not uniform, it is difficult to ignore a pattern emerging that the courts are prepared to protect proprietary content on commercial sites from uses which are undesirable to the owners of such sites. However, the degree of protection for such content is not settled, and will depend on the type of access made by the scraper, the amount of information accessed and copied, the degree to which the access adversely affects the site owner’s system and the types and manner of prohibitions on such conduct.[11]

While the law in this area becomes more settled, entities contemplating using scraping programs to access a public web site should also consider whether such action is authorized by reviewing the terms of use and other terms or notices posted on or made available through the site. In the most recent ruling, Cvent, Inc. v. Eventbrite, Inc., the United States District Court for the Eastern District of Virginia ruled that the terms of use must be brought to the users’ attention in order for a browse wrap contract or license to be enforced.[12]

On the plaintiff’s web site during the period of this trial, the terms-of-use link was displayed among all the links of the site, at the bottom of the page, as on most sites on the Internet. This ruling contradicts the Irish ruling described below. The court also rejected the plaintiff’s argument that the browse wrap restrictions were enforceable in view of Virginia’s adoption of the Uniform Computer Information Transactions Act (UCITA), a uniform law that many believed favored common browse wrap contracting practices.[13]

Outside of the United States, in February 2006, the Danish Maritime and Commercial Court (Copenhagen) ruled that systematic crawling, indexing, and deep linking by portal site ofir.dk of real estate site Home.dk does not conflict with Danish law or the database directive of the European Union.[14]

In 2009 Facebook won one of the first copyright suits against a known web scraper. This laid the groundwork for numerous lawsuits that tie web scraping to direct copyright violation and clear monetary damages. The most recent such case is AP v. Meltwater, in which the courts stripped what is referred to as fair use on the Internet.[15]

In a February 2010 case complicated by matters of jurisdiction, Ireland’s An Ard-Chúirt delivered a verdict that illustrates the inchoate state of developing case law. In the case of Ryanair Ltd v Billigfluege.de GmbH, Ireland’s High Court ruled Ryanair’s “click-wrap” agreement to be legally binding. In contrast to the findings of the United States District Court Eastern District of Virginia and those of the Danish Maritime and Commercial Court, Mr. Justice Michael Hanna ruled that the hyperlink to Ryanair’s terms and conditions was plainly visible, and that placing the onus on the user to agree to terms and conditions in order to gain access to online services is sufficient to comprise a contractual relationship. A separate legal issue can arise from invalid or incorrect data, as scraped data is often mixed with a great deal of junk or spam; with respect to data warehousing, there may also be limitations of the query interface, inconsistent output, and rapid changes to the data made by administrators without notice.[16] The decision is under appeal in Ireland’s Supreme Court, the Cúirt Uachtarach na hÉireann.[17]

In Australia, the Spam Act 2003 outlaws some forms of web harvesting, although this only applies to email addresses.[18][19]

Technical measures to stop bots

The administrator of a website can use various measures to stop or slow a bot. Some techniques include:

  • Blocking an IP address. This will also block all browsing from that address.
  • Disabling any web service API that the website’s system might expose.
  • Bots sometimes declare who they are (using user agent strings) and can be blocked on that basis (using robots.txt); ‘googlebot’ is an example. Some bots make no distinction between themselves and a human browser.
  • Bots can be blocked by excess traffic monitoring.
  • Bots can sometimes be blocked with tools that verify that a real person is accessing the site, such as a CAPTCHA. Bots are sometimes coded to explicitly break specific CAPTCHA patterns.
  • Commercial anti-bot services: Companies offer anti-bot and anti-scraping services for websites. A few web application firewalls have limited bot detection capabilities as well.
  • Locating bots with a honeypot or other method to identify the IP addresses of automated crawlers.
  • Using CSS sprites to display such data as phone numbers or email addresses, at the cost of accessibility to screen reader users.
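The excess-traffic-monitoring item above can be sketched as a sliding-window rate limiter keyed by client IP. The window size and threshold are invented for illustration:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # illustrative window length
MAX_REQUESTS = 20     # illustrative per-window threshold

_hits = defaultdict(deque)  # per-IP timestamps of recent requests

def allow_request(ip, now=None):
    """Return False once an IP exceeds MAX_REQUESTS within WINDOW_SECONDS;
    such clients can then be throttled or blocked as likely bots."""
    now = time.monotonic() if now is None else now
    window = _hits[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()            # drop hits that left the window
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```

A production deployment would combine this with the other signals listed (user agent, honeypots, CAPTCHAs) rather than rely on request rate alone.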

http://en.wikipedia.org/wiki/Web_scraping

Related Posts On Pronk Pops

The Pronk Pops Show 207, February 10, 2014, Story 1: Democrats Lose 50 Year War On Poverty Start 100 Year War On Work: Millennial Moocher Mania — Grow The Government Shrink The Economy And Employment! — Progressive Permanent Poverty People — Videos


Democrats Lose 50 Year War on Poverty Start 100 Year War on Work: Millennial Moocher Mania — Grow The Government Shrink The Economy and Employment! — Progressive Permanent Poverty People — Videos

Posted on February 10, 2014. Filed under: American History, Babies, Blogroll, Books, Business, College, Comedy, Communications, Crime, Culture, Demographics, Diasters, Economics, Education, Federal Government, Federal Government Budget, Fiscal Policy, Foreign Policy, Fraud, government spending, Health Care, history, Homes, Inflation, Investments, IRS, Language, Law, liberty, Life, Links, Literacy, media, Medicine, Obamacare, People, Philosophy, Photos, Politics, Press, Private Sector, Public Sector, Radio, Rants, Raves, Talk Radio, Tax Policy, Taxes, Unemployment, Unions, Video, Wealth, Wisdom, Writing | Tags: , , , , , , , , , , , , , , , |




Story 1: Democrats Lose 50 Year War on Poverty Start 100 Year War on Work: Millennial Moocher Mania — Grow The Government Shrink The Economy and Employment! — Progressive Permanent Poverty People — Videos

[Images: editorial cartoons and CBO charts on Obamacare’s projected employment effects, labor-force participation, and entitlement spending]

Appendix C: Labor Market Effect of Affordable Care Act: Updated Estimates

Insurance Coverage Provisions of the Affordable Care Act— CBO’s February 2014 Baseline

Table 1. CBO’s May 2013 Estimate of the Effects of the Affordable Care Act on Health Insurance Coverage

Obamacare and jobs reports: Health care law could cost more than 2 million jobs

Casey Mulligan: Eroding incentives is damaging

W.H. defends Obamacare amid CBO findings

Obamacare ACA Impact On Workforce Why Work? Special Report All Star Panel

CBO Director to Congress: Obamacare Will Reduce Unemployment Rate

Hayes Admits CBO Obamacare Report ‘Not Some Right Wing Attack’

Obama Admin On CBO Report: You’re Now Free To “Work Or Not Work”, Thanks Obamacare – Stuart Varney

CBO Director: Obamacare creates ‘disincentive’ to work

Casey Mulligan – Affordable Care and the Labor Market

Casey Mulligan, PhD, Professor of Economics, University of Chicago
“Affordable Care and the Labor Market”
October 16, 2013
MacLean Center Seminar Series 2013-2014, Ethical Issues in Health Care Reform

15 Poverty and Welfare Programs

Public Economics and Finance – Social Insurance Programs

Public Economics and Finance – Social Insurance Programs Continued and Welfare Programs

Charles Murray: Why America is Coming Apart Along Class Lines

Uncommon Knowledge: White America Is ‘Coming Apart’

In Depth with Charles Murray

The Economist Who Exposed ObamaCare

The Chicago professor examined the law’s incentives for the poor not to get a job or work harder, and this week Beltway budgeteers agreed.

By JOSEPH RAGO

In September, two weeks before the Affordable Care Act was due to launch, President Obama declared that “there’s no serious evidence that the law . . . is holding back economic growth.” As for repealing ObamaCare, he added, “That’s not an agenda for economic growth. You’re not going to meet an economist who says that that’s a number-one priority in terms of boosting growth and jobs in this country—at least not a serious economist.”

In a way, Mr. Obama had a point: “Never met him,” says economist Casey Mulligan. If the unfamiliarity is mutual, the confusion is all presidential. Mr. Mulligan studies how government choices influence the incentives and rewards for work—and many more people may recognize the University of Chicago professor as a serious economist after this week. That’s because, more than anyone, Mr. Mulligan is responsible for the still-raging furor over the Congressional Budget Office’s conclusion that ObamaCare will, in fact, harm growth and jobs.

Rarely are political tempers so raw over an 11-page appendix to a dense budget projection for the next decade. But then the CBO—Congress’s official fiscal scorekeeper, widely revered by Democrats and Republicans alike as the gold standard of economic analysis—reported that by 2024 the equivalent of 2.5 million Americans who were otherwise willing and able to work before ObamaCare will work less or not at all as a result of ObamaCare.

As the CBO admits, that’s a “substantially larger” and “considerably higher” subtraction from the labor force than the mere 800,000 the budget office estimated in 2010. The overall level of labor will fall by 1.5% to 2% over the decade, the CBO figures.

Mr. Mulligan’s empirical research puts the best estimate of the contraction at 3%. The CBO still has some of the economics wrong, he said in a phone interview Thursday, “but, boy, it’s a lot better to be off by a factor of two than a factor of six.”

The CBO’s intellectual conversion is all the more notable for accepting Mr. Mulligan’s premise, which is that what economists call “implicit marginal tax rates” in ObamaCare make work less financially valuable for lower-income Americans. Because the insurance subsidies are tied to income and phase out as cash wages rise, some people will have the incentive to remain poorer in order to continue capturing higher benefits. Another way of putting it is that taking away benefits has the same effect as a direct tax, so lower-income workers are discouraged from climbing the income ladder by working harder, logging extra hours, taking a promotion or investing in their future earnings through job training or education.
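The phase-out mechanism described above can be sketched in a few lines. The subsidy schedule below is purely hypothetical (a $6,000 benefit phasing out at 25 cents per extra dollar earned, on top of an assumed 15% statutory tax rate), not the ACA’s actual formula; it only illustrates how withdrawing a benefit as income rises raises the implicit marginal tax rate.

```python
# Hypothetical illustration: a benefit that phases out as earnings rise acts
# like an extra tax on each additional dollar earned.

def net_income(earnings, statutory_rate=0.15, max_subsidy=6000, phase_out=0.25):
    """After-tax, after-subsidy income for a given level of earnings."""
    subsidy = max(0.0, max_subsidy - phase_out * earnings)
    return earnings * (1 - statutory_rate) + subsidy

def implicit_marginal_rate(earnings, delta=1.0):
    """Share of an extra dollar of earnings lost to taxes plus foregone benefits."""
    gain = net_income(earnings + delta) - net_income(earnings)
    return 1 - gain / delta

# Inside the phase-out range: 15% statutory tax + 25% benefit phase-out = 40%.
print(round(implicit_marginal_rate(20_000), 2))  # 0.4 while the subsidy is phasing out
print(round(implicit_marginal_rate(40_000), 2))  # 0.15 once the subsidy is exhausted
```

Inside the phase-out range the worker keeps only 60 cents of each additional dollar, so the implicit marginal rate is 40%; once the benefit is exhausted the rate drops back to the statutory 15%.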

The CBO works in mysterious ways, but its commentary and a footnote suggest that two National Bureau of Economic Research papers Mr. Mulligan published last August were “roughly” the most important drivers of this revision to its model. In short, the CBO has pulled this economist’s arguments and analysis from the fringes to the center of the health-care debate.

For his part, Mr. Mulligan declines to take too much credit. “I’m not an expert in that town, Washington,” he says, “but I showed them my work and I know they listened, carefully.”

At a February 2013 hearing he pointed out several discrepancies between the CBO’s marginal-tax-rate work and its health-care work, and, he says, “That couldn’t persist forever. There would have to be a time where they would reconcile those two approaches somehow.” More to the point, “I knew eventually it would be acknowledged that when you pay people for being low income you are going to have more low-income people.”

Mr. Mulligan thinks the CBO deserves particular credit for learning and then revising the old 800,000 number, not least because so many liberals cited it to dispute the claims of ObamaCare’s critics. The new finding might have prompted a debate about the marginal tax rates confronting the poor, but—well, it didn’t.

Instead, liberals have turned to claiming that ObamaCare’s missing workers will be a gift to society. Since employers aren’t cutting jobs per se through layoffs or hourly take-backs, people are merely choosing rationally to supply less labor. Thanks to ObamaCare, we’re told, Americans can finally quit the salt mines and blacking factories and retire early, or spend more time with the children, or become artists.

Mr. Mulligan reserves particular scorn for the economists making this “eliminated from the drudgery of labor market” argument, which he views as a form of trahison des clercs (a betrayal by the intellectual class). “I don’t know what their intentions are,” he says, choosing his words carefully, “but it looks like they’re trying to leverage the lack of economic education in their audience by making these sorts of points.”

A job, Mr. Mulligan explains, “is a transaction between buyers and sellers. When a transaction doesn’t happen, it doesn’t happen. We know that it doesn’t matter on which side of the market you put the disincentives, the results are the same. . . . In this case you’re putting an implicit tax on work for households, and employers aren’t willing to compensate the households enough so they’ll still work.” Jobs can be destroyed by sellers (workers) as much as buyers (businesses).

He adds: “I can understand something like cigarettes and people believe that there’s too much smoking, so we put a tax on cigarettes, so people smoke less, and we say that’s a good thing. OK. But are we saying we were working too much before? Is that the new argument? I mean make up your mind. We’ve been complaining for six years now that there’s not enough work being done. . . . Even before the recession there was too little work in the economy. Now all of a sudden we wake up and say we’re glad that people are working less? We’re pursuing our dreams?”

The larger betrayal, Mr. Mulligan argues, is that the same economists now praising the great shrinking workforce used to claim that ObamaCare would expand the labor market.

He points to a 2011 letter organized by Harvard’s David Cutler and the University of Chicago’s Harold Pollack, signed by dozens of left-leaning economists including Nobel laureates, stating “our strong conclusion” that ObamaCare will strengthen the economy and create 250,000 to 400,000 jobs annually. (Mr. Cutler has since qualified and walked back some of his claims.)

“Why didn’t they say, no, we didn’t mean the labor market’s going to get bigger. We mean it’s going to get smaller in a good way,” Mr. Mulligan wonders. “I’m unhappy with that, to be honest, as an American, as an economist. Those kind of conclusions are tarnishing the field of economics, which is a great, maybe the greatest, field. They’re sure not making it look good by doing stuff like that.”

Mr. Mulligan’s investigation into the Affordable Care Act builds on his earlier work studying the 2009 Recovery and Reinvestment Act, aka the stimulus.

The Keynesian economists who dominate Mr. Obama’s Washington are preoccupied by demand, and their explanation for persistently high post-recession unemployment is weak demand for goods and thus demand for labor. Mr. Mulligan, by contrast, studies the supply of labor and attributes the state of the economy in large part to the expansion of the entitlement and welfare state, such as the surge in food stamps, unemployment benefits, Medicaid and other safety-net programs. As these benefits were enriched and extended to more people by the stimulus, he argues in his 2012 book “The Redistribution Recession,” they were responsible for about half the drop in work hours since 2007, and possibly more.

The nearby chart tracks marginal tax rates over time for nonelderly household heads and spouses with median earnings. This index is a population-weighted average over various ages, jobs, and employment decisions such as full-time versus part-time work. Basically, the chart shows the extra taxes paid and government benefits foregone as a result of earning an extra dollar of income.

The stimulus caused a spike in marginal rates, but at least it was temporary. ObamaCare will bring them permanently into the 47% range, or seven percentage points higher than in early 2007. Mr. Mulligan says the main response to his calculations is that people “didn’t realize the cumulative effect of these things together as a package to discourage work.”

Mr. Mulligan is uncomfortable speculating about whether the benefits of this shift outweigh the costs. Perhaps the public was willing to trade market efficiency for more income security after the 2008 crisis. “As an economist I can’t argue with that,” he says. “The thing that I argue with is the denial that there is a trade-off. I argue with the denial that if you pay unemployed people you’re going to get more unemployed people. There are consequences of that. That doesn’t mean the consequences aren’t worth paying. But you can’t deny the consequences for the labor market.”

One major risk is slower economic growth over time as people leave the workforce and contribute less to national prosperity. Another is that social programs with high marginal rates end up perpetuating the problems they’re supposed to be alleviating.

So amid the current wave of liberal ObamaCare denial about these realities, how did Mr. Mulligan end up conducting such “unconventional” research?

“Unconventional?” he asks with more than a little disbelief. “It’s not unconventional at all. The critique I get is that it’s not complicated enough.”

Well, then how come the CBO’s adoption of his insights is causing such a ruckus?

“I would phrase the question a little differently,” Mr. Mulligan responds, “which is: Why didn’t conventional economic analysis make its way to Washington? Why was I the only delivery boy? Why wasn’t there a laundry list?” The charitable explanation, he says, is that there was “a general lack of awareness” and economists simply didn’t realize everything that government was doing to undermine incentives for work. “You have to dig into it and see it,” he explains. “The Affordable Care Act’s not going to come and shake you out of your bed and say, ‘Look what’s in me.’ ”

Judging by their reaction to the CBO report, the less charitable explanation is that liberals would have preferred that the public never found out.

Mr. Rago is a member of the Journal’s editorial board.

Lawmakers Spar Over CBO’s U.S. Health-Law Findings

Questions Over Impact on Workforce Create ‘Hysteria’ on Capitol Hill

A new report outlining the effect of the Affordable Care Act on the labor market continued to reverberate on Capitol Hill Wednesday, with lawmakers in both parties saying the findings bolstered their view of how the law would play out.

Republicans at a House Budget Committee hearing said the report, released Tuesday, shows the health law will drive people out of the work force. Democrats countered that the report shows the law will give workers flexibility to leave jobs they are locked into because of health-care benefits.

The sparring came in response to a Congressional Budget Office analysis concluding that subsidies in the law, combined with easier access to health care, would create incentives for many Americans to cut their work hours, leading to a net reduction of 1.5% to 2% from 2017 through 2024. This would be the equivalent of reducing the labor force by 2.5 million workers in 2024, the CBO found.

“The effects we estimated are almost entirely choices by people,” CBO Director Douglas Elmendorf said at the hearing. He said, for example, that the labor changes wouldn’t be driven by employers cutting jobs, but rather by workers deciding to cut back on their hours to care for children or parents, or to pursue other interests.

The report struck a chord in Washington. Rep. Hakeem Jeffries (D., N.Y.) said at the hearing that the analysis by CBO, a nonpartisan agency that advises Congress, had caused “hysteria.”

Many Republicans said the CBO confirmed their long-held belief that the law would have a direct impact on the labor market and harm economic growth. They said it would expedite the decline in labor-force participation, which is expected to worsen in coming years as more aging Americans drop out of the work force.

“These changes—they disproportionately affect low-wage workers,” House Budget Committee Chairman Paul Ryan (R., Wis.) said. “Translation: Washington is making the poverty trap worse.”

Democrats on Wednesday said the study confirmed their belief that the law would free many Americans from a phenomenon known as “job lock,” or the idea that people don’t change their jobs for fear of losing their health benefits.

“More Americans will be able to voluntarily, choose—choose—to work fewer hours or not take a job because they don’t depend on that job any more for the provision of health insurance,” Rep. Chris Van Hollen (D., Md.) said. “Before the Affordable Care Act, if you lost your job, you lost your health insurance.”

Mr. Elmendorf stressed that the law’s impact on the labor market could be difficult to predict. He agreed, for example, with one Republican lawmaker who said that by reducing the number of hours worked by many Americans, it would reduce overall wages and lower the amount of money people paid in taxes from 2017 through 2024.

But he also agreed with a Democratic lawmaker who said the law could—in the short-term—create some new jobs by freeing up disposable income from workers who previously had to set aside money for health coverage.

The law’s impact on the labor market has drawn the focus of researchers since it was passed, in part because the law makes so many changes to health-care delivery that its broader economic impacts have proved difficult to predict.

A 2013 study by researchers at Northwestern University, Columbia University and the University of Chicago estimated the Affordable Care Act’s impact could be particularly acute, including among Americans who are near retirement and hang on to jobs to retain health care before they qualify for Medicare at age 65.

The study found the new law “creates a nonemployer option for health insurance that is going to be fairly priced for a large number of Americans, and that hasn’t been available,” said Craig Garthwaite, an assistant professor at Northwestern’s Kellogg School of Management, and one of the study’s co-authors.

But he said there is a trade-off to the broader access to health care, and said “there should be some pause for concern here about any policies that actually weaken labor-force attachment.”

http://online.wsj.com/news/articles/SB10001424052702304181204579364933406260084?mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424052702304181204579364933406260084.html

Health Law To Cut Into Labor Force

CBO Report Forecasts More People Will Opt to Work Less as They Seek Coverage Through Affordable Care Act

By LOUISE RADNOFSKY and DAMIAN PALETTA

The new health law is projected to reduce the total number of hours Americans work by the equivalent of 2.3 million full-time jobs in 2021, a bigger impact on the workforce than previously expected, according to a nonpartisan congressional report.

The analysis, by the Congressional Budget Office, says a key factor is people scaling back how much they work and instead getting health coverage through the Affordable Care Act. The agency had earlier forecast the labor-force impact would be the equivalent of 800,000 workers in 2021.

Because the CBO estimated that the changes would be a result of workers’ choices, it said the law, President Barack Obama‘s signature initiative, wouldn’t lead to a rise in the unemployment rate. But the labor-force impact could slow growth in future years, though the precise impact is uncertain.

Social programs in the United States

From Wikipedia, the free encyclopedia

The Social Security Administration, created in 1935, was the first major federal welfare agency and continues to be the most prominent.[1]

Social programs in the United States are welfare subsidies designed to aid the needs of the U.S. population. Proposals for federal programs began with Theodore Roosevelt’s New Nationalism and expanded with Woodrow Wilson’s New Freedom, Franklin D. Roosevelt’s New Deal, John F. Kennedy’s New Frontier, and Lyndon B. Johnson’s Great Society.

The programs vary in eligibility requirements and are provided by various organizations on a federal, state, local and private level. They help to provide food, shelter, education, healthcare and money to U.S. citizens through primary and secondary education, subsidies of college education, unemployment and disability insurance, subsidies for eligible low-wage workers, subsidies for housing, Supplemental Nutrition Assistance Program benefits, pensions for eligible persons and health insurance programs that cover public employees. The Social Security system is the largest and most prominent social aid program.[1][2] Medicare is another prominent program.

Not including Social Security and Medicare, Congress allocated almost $717 billion in federal funds in 2010, plus $210 billion in state funds ($927 billion total), for means-tested welfare programs in the United States; expenditures after 2010 are not yet known but are higher.[3] As of 2011, the public social spending-to-GDP ratio in the United States was below the OECD average.[4]

Total Social Security and Medicare expenditures in 2013 were $1.3 trillion, 8.4% of the $16.3 trillion GDP (2013) and 37% of the total federal expenditure budget of $3.684 trillion.[5][6]

In addition to government expenditures, private welfare spending in the United States is thought to be about 10% of U.S. GDP, or another $1.6 trillion.[7]

Analysis

Household Characteristics

Characteristics of Households by Quintile, 2010[8]

Household income bracket           0-20%   21-40%   41-60%   61-80%   81-100%
Earners per household               0.42     0.90     1.29     1.70     1.97
Married couples (%)                 17.0     35.9     48.8     64.3     78.4
Single parents or single (%)        83.0     64.1     51.2     35.7     21.6
Householder under 35 (%)            23.3     24.0     24.5     21.8     14.6
Householder 36-64 years (%)         43.6     46.6     55.4     64.3     74.7
Householder 65 years + (%)          33.1     29.4     20.1     13.9     10.7
Worked full time (%)                17.4     44.7     61.1     71.5     77.2
Worked part time (%)                14.3     13.3     11.1      9.8      9.5
Did not work (%)                    68.2     42.1     27.8     17.7     13.3
Less than high school (%)           26.7     16.6      8.8      5.4      2.2
High school or some college (%)     61.2     65.4     62.9     58.5     37.6
Bachelor’s degree or higher (%)     12.1     18.0     28.3     36.1     60.3
Source: U.S. Census Bureau

Social programs have been implemented to promote a variety of societal goals, including alleviating the effects of poverty on those earning or receiving low income or encountering serious medical problems, and ensuring retired people have a basic standard of living.

Unlike in Europe, Christian democratic and social democratic theories have not played a major role in shaping welfare policy in the United States.[9] Entitlement programs in the U.S. were virtually non-existent until the administration of Franklin Delano Roosevelt and the implementation of the New Deal programs in response to the Great Depression. Between 1932 and 1981, modern American liberalism dominated U.S. economic policy and the entitlements grew along with American middle class wealth.[10]

Eligibility for welfare benefits depends on a variety of factors, including gross and net income, family size, pregnancy, homelessness, unemployment, and serious medical conditions like blindness, kidney failure or AIDS.

Drug Testing for applicants

Drug testing of potential welfare recipients has become an increasingly controversial topic. Richard Hudson, a Republican from North Carolina, claims he pushes for drug screening as a matter of “moral obligation” and that testing should be enforced as a way for the United States government to discourage drug usage.[11] Others claim that ordering the needy to drug test “stereotypes, stigmatizes, and criminalizes” them without need.[12] States that currently require drug tests to be performed in order to receive public assistance include Arizona, Florida, Georgia, Missouri, Oklahoma, Tennessee, and Utah.[13]

Demographics of TANF Recipients

A chart showing the overall decline of average monthly welfare benefits (AFDC then TANF) per recipient 1962–2006 (in 2006 dollars).[14]

Some have argued that welfare has come to be associated with poverty. Martin Gilens, assistant professor of Political Science at Yale University, argues that blacks have overwhelmingly dominated images of poverty over the last few decades and states that “white Americans with the most exaggerated misunderstandings of the racial composition of the poor are the most likely to oppose welfare”.[15][16] This perception possibly perpetuates negative racial stereotypes and could increase Americans’ opposition and racialization of welfare policies.[15]

In FY 2010, African-American families comprised 31.9% of TANF families, white families 31.8%, and Hispanic families 30.0%.[17] Since the implementation of TANF, the percentage of Hispanic families has increased, while the percentages of white and black families have decreased. In FY 1997, African-American families represented 37.3% of TANF recipient families, white families 34.5%, and Hispanic families 22.5%.[18] The population as a whole is composed of 63.7% whites, 16.3% Hispanics, 12.5% African-Americans, 4.8% Asians and 2.9% other races.[19] Use of TANF programs, at a cost of about $20.0 billion (2013), has decreased as Earned Income Tax Credits, Medicaid grants, food stamps (SNAP), Supplemental Security Income (SSI), child nutrition programs (CHIP), housing assistance, feeding programs (WIC & CSFP), and about 70 more programs have increased to over $700.0 billion in 2013.[20]

Costs

In 2002, total U.S. social welfare expenditure constituted over 35% of GDP, with purely public expenditure constituting 21%, publicly supported but privately provided welfare services constituting 10% of GDP, and purely private services constituting 4% of GDP. This compares with the “welfare states” of France and Sweden, where welfare spending ranges from 30% to 35% of GDP.[21][22]

The Great Recession made a large impact on welfare spending. In a 2011 article, Forbes reported, “The best estimate of the cost of the 185 federal means tested welfare programs for 2010 for the federal government alone is $717 billion, up a third since 2008, according to the Heritage Foundation. Counting state spending of about $210 billion, total welfare spending for 2010 reached over $920 billion, up nearly one-fourth since 2008 (24.3%)”, and it was increasing fast.[23] The previous decade had seen a 60% decrease in the number of people receiving welfare benefits,[24] beginning with the passage of the Personal Responsibility and Work Opportunity Act, but spending did not decrease proportionally during that time period.

Impact of social programs

Average Incomes and Taxes, CBO Study 2009*[25]

Households    Market    Federal     Income +    Avg Federal   Federal    % Federal          % Net
by Income     Income1   Transfers2  Transfers   Tax Rate %3   Taxes $4   Taxes Pd.5   #W6   Income7
0-20%           7,600     22,900      30,500        1.0           200       0.3      0.42     6.2
21-40%         30,100     14,800      45,000        6.8         2,900       3.8      0.90    11.1
41-60%         54,200     10,400      64,600       11.1         7,200       9.4      1.29    15.8
61-80%         86,400      7,100      93,500       15.1        14,100      18.3      1.70    21.6
81-100%       218,800      6,000     224,800       23.2        51,900      67.9      1.97    47.2
Source: Congressional Budget Office Study[25]
1. Market income = all wages, tips, incomes, etc. as listed on the income tax form
2. Federal transfers = all EITC, CTC, Medicaid, food stamps (SNAP), Social Security, SSI, etc. received
3. Average tax rate includes all Social Security, Medicare, income, business income, excise, etc. taxes
4. Net federal taxes paid in dollars
5. Percent of all federal taxes paid
6. #W = average number of workers per household in this quintile
7. % Net income = percentage of all national income each quintile receives after taxes and transfers

According to the Congressional Budget Office, social programs significantly raise the standard of living for low-income Americans, particularly the elderly. The poorest 20% of American households earn a before-tax average of only $7,600 – less than half of the federal poverty line. Social programs increase those households’ before-tax income to $30,500. Social Security and Medicare are responsible for two-thirds of that increase.[25]
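The arithmetic behind that paragraph comes straight off the CBO table: market income plus federal transfers reproduces the “Income + Transfers” column, and for the bottom quintile transfers roughly quadruple before-tax income.

```python
# Figures (in dollars) copied from the CBO table above.
market = {"0-20%": 7_600, "21-40%": 30_100, "41-60%": 54_200,
          "61-80%": 86_400, "81-100%": 218_800}
transfers = {"0-20%": 22_900, "21-40%": 14_800, "41-60%": 10_400,
             "61-80%": 7_100, "81-100%": 6_000}

# "Income + Transfers" column: before-tax income including federal transfers.
income_with_transfers = {q: market[q] + transfers[q] for q in market}

print(income_with_transfers["0-20%"])                    # 30500
print(income_with_transfers["0-20%"] / market["0-20%"])  # roughly 4x market income
```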

History

Public Health nursing made available through child welfare services, 1935.

Federal Social Welfare programs

Colonial legislatures and later State governments adopted legislation patterned after the English “poor” laws. Aid to veterans, often free grants of land, and pensions for widows and handicapped veterans, have been offered in all U.S. wars. Following World War I, provisions were made for a full-scale system of hospital and medical care benefits for veterans. By 1929, workers’ compensation laws were in effect in all but four States. These state laws made industry and businesses responsible for the costs of compensating workers or their survivors when the worker was injured or killed in connection with his or her job. Retirement programs, mainly for state and local government employees such as teachers, police officers, and fire fighters, date back to the 19th century. All these social programs were far from universal and varied considerably from one state to another.

Prior to the Great Depression, the United States had social programs that mostly centered on individual efforts, family efforts, church charities, business workers’ compensation, life insurance and sick leave programs, along with some state tax-supported social programs. The misery and poverty of the Great Depression threatened to overwhelm all these programs. The severe Depression of the 1930s made federal action almost a necessity, as neither the states and local communities, businesses and industries, nor private charities had the financial resources to cope with the growing need among the American people. Beginning in 1932, the federal government first made loans, then grants, to states to pay for direct relief and work relief. After that, special federal emergency relief programs like the Civilian Conservation Corps and other public works programs were started. In 1935, President Franklin D. Roosevelt’s administration proposed to Congress federal social relief programs and a federally sponsored retirement program. Congress responded by passing the 37-page Social Security Act, signed into law on August 14, 1935 and “effective” by 1939, just as World War II began. This program was expanded several times over the years.

War on Poverty and Great Society programs (1960s)

Further information: War on Poverty and Great Society

After the Great Society legislation of the 1960s, for the first time a person who was not elderly or disabled could receive need-based aid from the federal government.[26] Aid could include general welfare payments, health care through Medicaid, food stamps, special payments for pregnant women and young mothers, and federal and state housing benefits.[26]

In 1968, 4.1% of families were headed by a woman receiving welfare assistance; by 1980, the percentage increased to 10%.[26] In the 1970s, California was the U.S. state with the most generous welfare system.[27] Virtually all food stamp costs are paid by the federal government.[28] In 2008, 28.7 percent of the households headed by single women were considered poor.[29]

Welfare reform (1990s)

Before the Welfare Reform Act of 1996, welfare assistance was “once considered an open-ended right,” but welfare reform converted it “into a finite program built to provide short-term cash assistance and steer people quickly into jobs.”[30] Prior to reform, states were given “limitless”[30] money by the federal government, increasing with each family on welfare, under the 60-year-old Aid to Families with Dependent Children (AFDC) program.[31] This gave states no incentive to direct welfare funds to the neediest recipients or to encourage individuals to go off welfare benefits (the state lost federal money when someone left the system).[32] Nationwide, one child in seven received AFDC funds,[31] which mostly went to single mothers.[28]

In 1996, under the Bill Clinton administration, Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act, which gave more control of the welfare system to the states, though there are basic requirements the states need to meet with regards to welfare services. Still, most states offer basic assistance, such as health care, food assistance, child care assistance, unemployment, cash aid, and housing assistance. After reforms, which President Clinton said would “end welfare as we know it,”[28] amounts from the federal government were given out in a flat rate per state based on population.[32]

Each state must meet certain criteria to ensure recipients are being encouraged to work themselves out of welfare. The new program is called Temporary Assistance for Needy Families (TANF).[31] It encourages states to require some sort of employment search in exchange for providing funds to individuals, and imposes a five-year lifetime limit on cash assistance.[28][24][31] The bill restricts welfare from most legal immigrants and increased financial assistance for child care.[24] The federal government also maintains an emergency $2 billion TANF fund to assist states that may have rising unemployment.[31]

President Bill Clinton signing welfare reform legislation.

Following these changes, millions of people left the welfare rolls (a 60% drop overall),[24] employment rose, and the child poverty rate was reduced.[28] A 2007 Congressional Budget Office study found that incomes in affected families rose by 35%.[24] The reforms were “widely applauded”[33] after “bitter protest.”[28] The Times called the reform “one of the few undisputed triumphs of American government in the past 20 years.”[34]

Critics of the reforms sometimes point out that the massive decrease in the welfare rolls during the 1990s was not due to a rise in actual gainful employment in this population, but rather almost exclusively to the offloading of recipients into workfare, which gave them a different classification from the classic welfare recipient. The late 1990s were also considered an unusually strong economic period, and critics voiced their concern about what would happen in an economic downturn.[28]

National Review editorialized that the Economic Stimulus Act of 2009 would reverse the welfare-to-work provisions that Bill Clinton signed in the 1990s, and would again base federal grants to states on the number of people signed up for welfare rather than on a flat rate.[32] One of the experts who worked on the 1996 bill said that the provisions would lead to the largest one-year increase in welfare spending in American history.[34] The House bill provided $4 billion to pay 80% of states’ welfare caseloads.[31] Although the states continued to receive $16.5 billion annually from the federal government as welfare rolls dropped, they spent the rest of the block grant on other types of assistance rather than saving it for worse economic times.[30]

Spending on Largest Welfare Programs
Federal Spending, 2003 and 2013*[35]

Federal Program                        Spending 2003   Spending 2013
Medicaid Grants to States              $201,389        $266,565
Food Stamps (SNAP)                     61,717          82,603
Earned Income Tax Credit (EITC)        40,027          55,123
Supplemental Security Income (SSI)     38,315          50,544
Housing assistance                     37,205          49,739
Child Nutrition Program (CHIP)         13,558          20,842
Support Payments to States, TANF       28,980          20,842
Feeding Programs (WIC & CSFP)          5,695           6,671
Low Income Home Energy Assistance      2,542           3,704

Notes:
* Spending in millions of dollars

Timeline

The following is a short timeline of welfare in the United States:[36]

1880s–1890s: Attempts were made to move poor people seeking relief funds from work yards to poorhouses.

1893–1894: The first attempts at unemployment payments were made, but were unsuccessful due to the 1893–1894 recession.

1932: As the Great Depression worsened, the first attempts to fund relief failed. The Emergency Relief Act, which gave local governments $300 million, was passed into law.

1933: In March 1933, President Franklin D. Roosevelt pushed Congress to establish the Civilian Conservation Corps.

1935: The Social Security Act was passed on June 17, 1935. The bill included direct relief (cash, food stamps, etc.) and changes for unemployment insurance.

1940: Aid to Families With Dependent Children (AFDC) was established.

1964: Johnson’s War on Poverty was underway, and the Economic Opportunity Act was passed. These initiatives are commonly known as “the Great Society.”

1996: The Personal Responsibility and Work Opportunity Reconciliation Act of 1996, passed under Clinton, became law.

2013: The Affordable Care Act went into effect, bringing large increases in Medicaid enrollment and subsidies for medical insurance premiums.

Types of social programs

Means-tested social programs

79 Means-Tested Programs in U.S. (2011)[37]

Program                                                      Federal*      State*        Total*
TOTAL cost in billions (2011)                                $717          $210          $927
Social Security OASDI (2013, billions)                       $785
Medicare (2013, billions)                                    $574
TOTAL all programs (billions)                                $2,287

CASH ASSISTANCE (millions)
SSI/Old Age Assistance                                       56,462.00     4,673.00      61,135.00
Earned Income Tax Credit (refundable portion)                55,652.00                   55,652.00
Refundable Child Credit                                      22,691.00                   22,691.00
Make Work Pay Tax Credit (refundable portion)                13,905.00                   13,905.00
Temporary Assistance for Needy Families (TANF, old AFDC)     6,882.89      6,876.86      13,759.74
Foster Care Title IV-E                                       4,456.00      3,921.28      8,377.28
Adoption Assistance Title IV-E                               2,362.00      1,316.00      3,678.00
General Assistance Cash                                                    2,625.00      2,625.00
Refugee Assistance                                           167.86                      167.86
General Assistance to Indians                                115.00                      115.00
Assets for Independence                                      24.00                       24.00
CASH TOTAL                                                   162,717.75    19,412.14     182,129.88

MEDICAL
Medicaid                                                     274,964.00    157,600.00    432,564.00
SCHIP State Supplemental Health Insurance Program            8,629.00      3,796.76      12,425.76
Medical General Assistance                                                 6,965.90      6,965.90
Consolidated Health Center/Community Health Centers          1,481.00                    1,481.00
Maternal & Child Health                                      656.00        492.00        1,148.00
Medical Assistance to Refugees                               167.86                      167.86
Healthy Start                                                104.00                      104.00
MEDICAL TOTAL                                                289,816.86    168,854.66    458,671.52

FOOD
Food Stamps, SNAP                                            77,637.00     6,987.33      84,624.33
School Lunch Program                                         10,321.00                   10,321.00
WIC Women, Infants and Children Food Program                 6,787.00                    6,787.00
School Breakfast                                             3,076.00                    3,076.00
Child Care Food Program                                      2,732.00                    2,732.00
Nutrition Program for the Elderly, Nutrition Service
  Incentives                                                 820.00        139.40        959.40
Summer Program                                               376.00                      376.00
Commodity Supplemental Food Program                          196.00                      196.00
TEFAP Temporary Emergency Food Program                       247.00                      247.00
Needy Families                                               60.00                       60.00
Farmers’ Market Nutrition Program                            23.00                       23.00
Special Milk Program                                         13.00                       13.00
FOOD TOTAL                                                   102,288.00    7,126.73      109,414.73

HOUSING
Section 8 Housing (HUD)                                      28,435.00                   28,435.00
Public Housing (HUD)                                         8,973.00                    8,973.00
Low Income Housing Tax Credit for Developers                 6,150.00                    6,150.00
Home Investment Partnership Program (HUD)                    2,853.00                    2,853.00
Homeless Assistance Grants (HUD)                             2,280.00                    2,280.00
State Housing Expenditures (from SWE)                                      2,085.00      2,085.00
Rural Housing Insurance Fund (Agriculture)                   1,689.00                    1,689.00
Rural Housing Service (Agriculture)                          1,085.00                    1,085.00
Housing for the Elderly (HUD)                                934.00                      934.00
Native American Housing Block Grants (HUD)                   854.00                      854.00
Other Assisted Housing Programs (HUD)                        496.00                      496.00
Housing for Persons with Disabilities (HUD)                  309.00                      309.00
HOUSING TOTAL                                                54,058.00     2,085.00      56,143.00

ENERGY AND UTILITIES
LIHEAP Low Income Home Energy Assistance                     4,419.00                    4,419.00
Universal Service Fund Subsidized Low Income Phone Service   1,750.00                    1,750.00
Weatherization                                               234.00                      234.00
ENERGY AND UTILITIES TOTAL                                   6,403.00                    6,403.00

EDUCATION
Pell Grants                                                  41,458.00                   41,458.00
Title One Grants to Local Education Authorities              14,472.00                   14,472.00
21st Century Learning Centers                                1,157.00                    1,157.00
Special Programs for Disadvantaged (TRIO)                    883.00                      883.00
Supplemental Education Opportunity Grants                    740.00                      740.00
Adult Basic Education Grants                                 607.00                      607.00
Migrant Education                                            444.00                      444.00
Gear-Up                                                      303.00                      303.00
LEAP (formerly State Student Incentive Grant Program, SSIG)  1.00                        1.00
Education for Homeless Children and Youth                    65.00                       65.00
Even Start                                                   4.00                        4.00
Aid for Graduate and Professional Study for
  Disadvantaged and Minorities                               41.00                       41.00
EDUCATION TOTAL                                              60,175.00                   60,175.00

TRAINING
TANF Work Activities and Training                            2,504.90      831.93        3,336.83
Job Corps                                                    1,659.00                    1,659.00
WIA Youth Opportunity Grants (formerly Summer
  Youth Employment)                                          946.00                      946.00
Senior Community Service Employment                          705.00        77.55         782.55
WIA Adult Employment and Training (formerly JTPA IIA
  Training for Disadvantaged Adults & Youth)                 766.00                      766.00
Food Stamp Employment and Training Program                   393.00        166.00        559.00
Foster Grandparents                                          104.00        10.40         114.40
YouthBuild                                                   110.00                      110.00
Migrant Training                                             85.00                       85.00
Native American Training                                     52.00                       52.00
TRAINING TOTAL                                               7,324.90      1,085.88      8,410.78

SERVICES
TANF Block Grant Services                                    5,385.12      4,838.13      10,223.25
Title XX Social Services Block Grant                         1,787.00                    1,787.00
Community Service Block Grant                                678.00                      678.00
Social Services for Refugees, Asylees and
  Humanitarian Cases                                         417.28                      417.28
Safe and Stable Families                                     553.00                      553.00
Title III Aging Americans Act                                369.00                      369.00
Legal Services Block Grant                                   406.00                      406.00
Family Planning                                              298.00                      298.00
Emergency Food and Shelter Program                           48.00                       48.00
Healthy Marriage and Responsible Fatherhood Grants           50.00                       150.00
Independent Living (Chafee Foster Care
  Independence Program)                                      140.00        28.00         168.00
Independent Living Training Vouchers                         45.00                       45.00
Maternal, Infants and Children Home Visitation               36.00                       36.00
SERVICES TOTAL                                               10,411.40     4,866.13      15,277.53

CHILD CARE AND CHILD DEVELOPMENT
Headstart                                                    7,559.00      1,889.75      9,448.75
Childcare and Child Development Block Grant                  2,984.00      2,176.00      5,160.00
Childcare Entitlement to the States                          3,100.00                    3,100.00
TANF Block Grant Child Care                                  2,318.56      2,643.78      4,962.35
CHILD CARE & CHILD DEVELOPMENT TOTAL                         15,961.56     6,709.53      22,671.10

COMMUNITY DEVELOPMENT
Community Development Block Grant and Related
  Development Funds                                          7,445.00                    7,445.00
Economic Development Administration (Dept. of Commerce)      423.00                      423.00
Appalachian Regional Development                             68.00                       68.00
Empowerment Zones, Enterprise Communities Renewal            1.00                        1.00
COMMUNITY DEVELOPMENT TOTAL                                  7,937.00                    7,937.00

TOTAL in millions (2011)                                     $717,093.48   $210,140.07   $927,233.55
Social Security OASDI (2013)                                 $785,700
Medicare (2013)                                              $574,200
TOTAL in millions                                            $2,287,133

Notes:
* Spending in millions of dollars
The $2.3 trillion total of Social Security, Medicare, and means-tested welfare understates 2013 spending, since the latest means-tested data available are for 2011; the “real” 2013 total will be higher.

Social security

The Social Security program mainly refers to the Old Age, Survivors, and Disability Insurance (OASDI) program and, in some usages, the unemployment insurance program. Retirement Insurance Benefits (RIB), also known as old-age insurance benefits, are a form of social insurance payment made by the U.S. Social Security Administration upon attainment of old age (62 or older).

Social Security Disability Insurance (SSD or SSDI) is a federal insurance program that provides income supplements to people who are restricted in their ability to be employed because of a notable disability.

Unemployment insurance, also known as unemployment compensation, provides money, collected from employers by the federal and state governments, to workers who have become unemployed through no fault of their own. Unemployment benefits are run by each state, with state-defined criteria for duration, percentage of income paid, and so on. Nearly all states require the recipient to document their search for employment in order to continue receiving benefits. Extensions of the benefit period are sometimes offered during periods of extensive unemployment. These extra benefits usually take the form of loans from the federal government that have to be repaid by each state.

General welfare

The Supplemental Security Income (SSI) program provides stipends to low-income people who are either aged (65 or older), blind, or disabled.

The Temporary Assistance for Needy Families (TANF) provides cash assistance to indigent American families with dependent children.

Healthcare spending

Health care in the United States is provided by many separate legal entities. Health care facilities are largely owned and operated by the private sector. Health insurance in the United States is now primarily provided by the government in the public sector, with 60–65% of healthcare provision and spending coming from programs such as Medicare, Medicaid, TRICARE, the Children’s Health Insurance Program, and the Veterans Health Administration.

Medicare is a social insurance program administered by the United States government, providing health insurance coverage to people who are aged 65 and over; to those who are under 65 and are permanently physically disabled or who have a congenital physical disability; or to those who meet other special criteria, such as the End Stage Renal Disease (ESRD) program. Medicare in the United States somewhat resembles a single-payer health care system but is not one. Before Medicare, only 51% of people aged 65 and older had health care coverage, and nearly 30% lived below the federal poverty level.

Medicaid is a health program for certain people and families with low incomes and resources. It is a means-tested program that is jointly funded by the state and federal governments, and is managed by the states.[38] People served by Medicaid are U.S. citizens or legal permanent residents, including low-income adults, their children, and people with certain disabilities. Poverty alone does not necessarily qualify someone for Medicaid. Medicaid is the largest source of funding for medical and health-related services for people with limited income in the United States.

The Children’s Health Insurance Program (CHIP) is a program administered by the United States Department of Health and Human Services that provides matching funds to states for health insurance to families with children.[39] The program was designed to cover uninsured children in families with incomes that are modest but too high to qualify for Medicaid.

The Alcohol, Drug Abuse, and Mental Health Services Block Grant (or ADMS Block Grant) is a federal assistance block grant given by the United States Department of Health and Human Services.

Education spending

University of California, Berkeley is one of the oldest public universities in the U.S.

Per capita spending on tertiary education is among the highest in the world.[citation needed] Public education is managed by individual states, municipalities, and regional school districts. As in all developed countries, primary and secondary education is free, universal, and mandatory. Parents have the option of home-schooling their children, though some states, such as California (until a 2008 legal ruling overturned the requirement[40]), have required parents to obtain teaching credentials before doing so. In some states and regions, experimental programs give lower-income parents the option of using government-issued vouchers to send their children to private rather than public schools.

As of 2007, more than 80% of all primary and secondary students were enrolled in public schools, including 75% of those from households with incomes in the top 5%. Public schools commonly offer after-school programs and the government subsidizes private after school programs, such as the Boys & Girls Club. While pre-school education is subsidized as well, through programs such as Head Start, many Americans still find themselves unable to take advantage of them. Some education critics have therefore proposed creating a comprehensive transfer system to make pre-school education universal, pointing out that the financial returns alone would compensate for the cost.

Tertiary education is not free, but it is subsidized by individual states and the federal government. Some of the costs at public institutions are carried by the state.

The government also provides grants, scholarships, and subsidized loans to most students. Those who do not qualify for any type of aid can obtain a government-guaranteed loan, and tuition can often be deducted from the federal income tax. Despite subsidized attendance costs at public institutions and tax deductions, however, tuition costs have risen at three times the rate of median household income since 1982.[41] Fearing that many future Americans might be excluded from tertiary education, progressive Democrats have proposed increasing financial aid and subsidizing an increased share of attendance costs. Some Democratic politicians and political groups have also proposed making public tertiary education free of charge, i.e., subsidizing 100% of attendance costs.[citation needed]

Food assistance

In the U.S., financial assistance for food purchasing for low- and no-income people is provided through the Supplemental Nutrition Assistance Program (SNAP), historically and commonly known as the Food Stamp Program.[42] This federal aid program is administered by the Food and Nutrition Service of the U.S. Department of Agriculture, but benefits are distributed by the individual U.S. states. All legal references to “stamp” and “coupon” have been replaced by “EBT” and “card,” referring to the refillable plastic Electronic Benefit Transfer (EBT) cards that replaced the paper “food stamp” coupons. To be eligible for SNAP benefits, recipients must have incomes below 130 percent of the poverty line and must also own few assets.[43] Since the economic downturn began in 2008, the use of food stamps has increased.[43]

The Special Supplemental Nutrition Program for Women, Infants and Children (WIC) is a child nutrition program for healthcare and nutrition of low-income pregnant women, breastfeeding women, and infants and children under the age of five. The eligibility requirement is a family income below 185% of the U.S. Poverty Income Guidelines, but if a person participates in other benefit programs, or has family members who participate in SNAP, Medicaid, or Temporary Assistance for Needy Families, they automatically meet the eligibility requirements.
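The income cutoffs above, 130% of the poverty line for SNAP and 185% for WIC, are simple percentage tests against the federal poverty guideline, so the arithmetic can be sketched in a few lines of code. This is illustrative only: the function names and the $20,000 guideline figure are invented for the example, and real SNAP and WIC determinations also involve household size, net-income tests, asset limits, and the automatic-eligibility rules described above.

```python
def income_limit(poverty_guideline, percent):
    """Gross-income ceiling for a program whose cutoff is a
    percentage of the federal poverty guideline."""
    return poverty_guideline * percent / 100

# Thresholds described in the text: SNAP uses 130%, WIC uses 185%.
def snap_income_eligible(household_income, poverty_guideline):
    return household_income < income_limit(poverty_guideline, 130)

def wic_income_eligible(household_income, poverty_guideline):
    return household_income < income_limit(poverty_guideline, 185)

# Illustrative guideline of $20,000 for a given household size.
print(snap_income_eligible(25_000, 20_000))  # 25,000 < 26,000 -> True
print(wic_income_eligible(38_000, 20_000))   # 38,000 > 37,000 -> False
```

Because WIC's cutoff is higher, a household can fail the SNAP income test while still passing WIC's, which is one reason the two programs reach different populations.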

The Child and Adult Care Food Program (CACFP) is a type of United States Federal assistance provided by the U.S. Department of Agriculture (USDA) to states in order to provide a daily subsidized food service for an estimated 3.2 million children and 112,000 elderly or mentally or physically impaired adults[44] in non-residential, day-care settings.[45]

Public housing

The Housing and Community Development Act of 1974 created Section 8 housing, the payment of rent assistance to private landlords on behalf of low-income households.

References

  1. Krugman, P. (2007). The Conscience of a Liberal. New York: W. W. Norton.
  2. Feldstein, M. (2005). Rethinking social insurance. American Economic Review, 95(1), pp. 1–24.
  3. Means tested programs [1], accessed 19 Nov 2013.
  4. Social spending after the crisis. OECD. (Social spending in a historical perspective, p. 5). Retrieved 26 December 2012.
  5. 2013 Status of the Social Security and Medicare Programs [2], accessed 16 Oct 2013.
  6. White House historical tables, Table 1 [3], accessed 16 Oct 2013.
  7. OECD database on social expenditures [4], accessed 9 Dec 2013.
  8. Characteristics of Households by Quintile 2010 [5], accessed 19 Nov 2013.
  9. Esping-Andersen, G. (1991). The Three Worlds of Welfare Capitalism. Princeton, NJ: Princeton University Press.
  10. Domhoff, G. William. “Who Rules America: Wealth, Income, and Power”. Sociology.ucsc.edu. Retrieved 2012-08-14.
  11. Delaney, Arthur. “Food Stamp Cuts Might Come With Drug Testing”. Huffington Post.
  12. Goetzl, Celia. “Government Mandated Drug Testing for Welfare Recipients: Special Need or Unconstitutional Condition?”. Retrieved October 24, 2013.
  13. Cohen, Robin. “Drug Testing of Public Assistance Recipients”. OLR Research Report. Retrieved October 24, 2013.
  14. 2008 Indicators of Welfare Dependence, Figure TANF 2.
  15. Gilens, Martin (1996). “Race and Poverty in America: Public Misperceptions and the American News Media.” Public Opinion Quarterly 60, no. 4, pp. 515–541.
  16. Gilens, Martin (1996). “Race and Poverty in America: Public Misperceptions and the American News Media.” Public Opinion Quarterly 60, no. 4, p. 516.
  17. “Characteristics and Financial Circumstances of TANF Recipients – Fiscal Year 2010”. United States Department of Health and Human Services.
  18. “Demographic and Financial Characteristics of Families Receiving Assistance”. United States Department of Health and Human Services.
  19. Demographics of U.S. population, Table 1 [6], accessed 26 Dec 2013.
  20. 79 means-tested welfare programs in the United States [7], accessed 26 Dec 2013.
  21. Alber, J. (1988). Is There a Crisis of the Welfare State? Cross-National Evidence from Europe, North America, and Japan. European Sociological Review, 4(3), 181–207.
  22. Hacker, J. S. (2002). The Divided Welfare State. New York: Cambridge University Press.
  23. Ferrara, Peter (2011-04-22). “America’s Ever Expanding Welfare Empire”. Forbes. Retrieved 2012-04-10.
  24. Goodman, Peter S. (2008-04-11). “From Welfare Shift in ’96, a Reminder for Clinton”. The New York Times. Retrieved 2009-02-12.
  25. Average Incomes and Taxes 2009 [8], accessed 19 Nov 2013.
  26. Frum, David (2000). How We Got Here: The ’70s. New York: Basic Books. p. 72. ISBN 0-465-04195-7.
  27. Frum, David (2000). How We Got Here: The ’70s. New York: Basic Books. p. 325. ISBN 0-465-04195-7.
  28. DeParle, Jason (2009-02-02). “Welfare Aid Isn’t Growing as Economy Drops Off”. The New York Times. Retrieved 2009-02-12.
  29. NPC.umich.edu
  30. “Welfare Rolls See First Climb in Years”. The Washington Post. 2008-12-17. Retrieved 2009-02-13.
  31. “Stimulus Bill Abolishes Welfare Reform and Adds New Welfare Spending”. Heritage Foundation. 2009-02-11. Retrieved 2009-02-12.
  32. “Ending Welfare Reform as We Knew It”. National Review. 2009-02-12. Retrieved 2009-02-12. [dead link]
  33. “Change for the Worse”. New York Post. 2009-01-30. Retrieved 2009-02-12. [dead link]
  34. Allen-Mills, Tony (2009-02-15). “Obama warned over ‘welfare spendathon’”. The Times (London). Retrieved 2009-02-15.
  35. Spending on largest welfare programs in the U.S. [9], accessed 19 Nov 2013.
  36. “Welfare Reform History Timeline – 1900s to current United States.” SearchBeat. Web. 12 Oct. 2009. <http://society.searchbeat.com/welfare9.htm>.
  37. Means-tested programs in the U.S. [10], accessed 19 Nov 2013.
  38. Medicaid General Information, Centers for Medicare and Medicaid Services (CMS) website.
  39. Sultz, H., & Young, K. Health Care USA: Understanding Its Organization and Delivery, p. 257.
  40. Jonathan L. v. Superior Court, 165 Cal.App.4th 1074 (Cal. App. 2 Dist. 2008). Text of opinion.
  41. Lewin, Tamar. “NYT on increase in tuition”. The New York Times. Retrieved 2009-01-15.
  42. “Nutrition Assistance Program Home Page”, U.S. Department of Agriculture (official website), March 3, 2011 (last revised). Accessed March 4, 2011.
  43. Eckholm, Erik (March 31, 2008). “Food stamp use in U.S. at record pace as jobs vanish”. The New York Times. Retrieved January 30, 2012.
  44. Why CACFP Is Important, Child and Adult Care Food Program Homepage, Food and Nutrition Service, U.S. Department of Agriculture.
  45. Child and Adult Care Food Program (CFDA 10.558); OMB Circular A-133 Compliance Supplement; Part 4: Agency Program Requirements: Department of Housing and Urban Development, p. 4-10.558-1.

Further reading

http://en.wikipedia.org/wiki/Social_programs_in_the_United_States

Related Posts On Pronk Pops

The Pronk Pops Show 207, February 10, 2014, Story 1: Snowden Used Automated Web Crawler To Scrap Data From Over 1.7 Million Restricted National Security Agency Files — Videos
