Little Boxes – Walk off the Earth
Obama On The Common Core Standards
President Obama, Secretary Duncan Announce Race to the Top
The Bottom Line: Education Database
Why We Need Common Core: “I choose C.”
Common Core Lesson Plans – Rated very funny!
Common Core Standards & Forming a Professional Learning Network (PLN)
Vision of the Common Core
Common Core State Standards: Principles of Development
General Session: Common Core State Standards
Moderator: Governor Jeb Bush, Governor of Florida from 1999-2007 and Chairman of the Foundation for Excellence in Education
Panelists: David Coleman, President and CEO of the College Board; Bob Corcoran, President and Chairman of the GE Foundation; Dr. William Schmidt, University Distinguished Professor and Co-Director of the Education Policy Center at Michigan State University; Minnesota State Representative
The Common Core State Standards and What’s Next for Higher Education | College Board Forum 2012
P20 Statewide Longitudinal Data System
Indoctrination And The Progressive Future – TheBlazeTV – The Glenn Beck Program – 2013.03.27
Data Mining In Common Core – TheBlazeTV – The Glenn Beck Program
Urgent Message On Common Core – TheBlazeTV – The Glenn Beck Radio Program – 2013.03.28
Part 1 of 5 Stop the Common Core
Part 2 of 5 Stop the Common Core
Part 3 of 5 Stop the Common Core
Part 4 of 5 Stop the Common Core
Part 5 of 5 Stop the Common Core
The Government will Control Your Child’s Every Move? Common Core Disaster?
The Glenn Beck Program – Air Date: Thursday, March 14, 2013
Rick Hess: Common Core as one more Obama initiative
Teacher Talk episode: Common Core State Standards
Learn the Common Core Standards in 10 Minutes
Common Core Curriculum Standards
Common Core Standards Overview | LiteracyTA
Common Core Standards - Mathematics by David Foster
Two Moms Against Common Core
Neal McCluskey: The Folly of Common Core Curricula
Pete Seeger – What Did You Learn In School?
Conservative Groups Oppose National ‘Common Core’ as an Intrusion on States
By STEPHANIE BANCHERO
The Common Core national math and reading standards, adopted by 46 states and the District of Columbia two years ago, are coming under attack from some quarters as a federal intrusion into state education matters.
The voluntary academic standards, which specify what students should know in each grade, were heavily promoted by the Obama administration through its $4.35 billion Race to the Top education-grant competition. States that instituted changes such as common learning goals received bonus points in their applications.
Supporters say the Common Core standards better prepare students for college or the workforce, and are important as the U.S. falls behind other nations in areas such as math proficiency.
A 2010 report from the Thomas B. Fordham Institute, a right-leaning educational-research group, said the Common Core standards “are clearly superior to those currently in use in 39 states in math and 37 states in English. For 33 states, the Common Core is superior in both math and reading.”
But conservative lawmakers and governors in at least five states, including Utah and Alabama, recently have been pushing to back out, or slow down implementation, of Common Core. They worry that adoption of the standards has created a de facto national curriculum that could at some point be extended into more controversial areas such as science.
Critics argue that the standards are weak and could, for example, de-emphasize literature in favor of informational texts, such as technical manuals. They also dislike that the standards postpone teaching algebra until ninth grade from the current eighth grade in many schools.
A study released this year by a researcher at the Brookings Institution think tank projected Common Core will have no effect on student achievement. The study said states with high standards improved their national math and reading scores at the same rate as states with low standards from 2003 to 2009.
But mainly, critics of Common Core object to what they see as the federal government’s involvement in local-school matters.
“The Common Core takes education out of the hands of South Carolina and parents, so we have no control over what happens in the classroom,” said Michael Fair, a Republican state senator who plans to introduce a measure that would bar his state from spending money on activities related to the standards, such as training teachers and purchasing textbooks.
South Carolina Gov. Nikki Haley, who took office after the state adopted Common Core, wrote in a letter to Mr. Fair that the state should not “relinquish control of education to the federal government, neither should we cede it to the consensus of other states.”
Common Core could take another hit Friday when the 23-member board of the American Legislative Exchange Council, a group of more than 2,000 state lawmakers and business members who back limited government and free markets, among other conservative goals, is set to vote on a resolution to formally oppose the standards. The resolution was passed by the ALEC education task force in December. Model legislation often is drafted from the group’s resolutions and taken by ALEC members to their state legislatures.
Common Core evolved from a drive by the National Governors Association and the Council of Chief State School Officers to delineate world-class skills students should possess. The standards, created with funding from, among others, the Bill & Melinda Gates Foundation, set detailed goals, such as that first-graders should understand place value in math and that eighth-graders should know the Pythagorean Theorem.
“We brought the best minds in the country together to create international benchmarks that, once mastered, would make our students more competitive, globally,” said Gene Wilhoit, executive director of the Council of Chief State School Officers. He said his group has no plans to create national science standards.
As the standards were being developed, the Obama administration launched Race to the Top in July 2009, which awarded points to states that adopted “a common set of K-12 standards” that are “substantially identical across all states in a consortium,” according to the grant’s policies. The department didn’t specifically mention Common Core, but it was the only common set of standards being developed.
As a result, most states’ legislatures or state boards of education adopted Common Core.
The standards have yet to show up in many classrooms as states are just beginning to implement them. But in Kentucky, where Common Core rolled out this school year, teachers are altering instruction and searching for new classroom reading materials.
Jahn Owens, a teacher in Owensboro, Ky., said the more rigorous standards require her to teach her fifth-graders how to multiply and divide fractions. Previously, that was taught in sixth grade. First-grade teacher Heidi Dees has added more nonfiction books to her classroom.
“These standards take students much deeper into the subjects and force them to do more critical thinking,” Ms. Owens said. “It’s been hard work for the teachers because the implementation was so quick, but we are now more purposeful about student learning.”
The Obama administration has awarded more than $360 million to two groups to create student assessments aligned to Common Core.
Wireless Generation, an education-technology company owned by News Corp., which also owns The Wall Street Journal, recently purchased Intel-Assess, a company that creates student assessments aligned to Common Core.
Justin Hamilton, a spokesman for the U.S. Department of Education, called Common Core a “game changer” but said the administration didn’t force states to adopt it. “A bipartisan group of governors created these standards and states collectively adopted them,” he said.
But Emmett McGroarty, executive director of American Principles in Action, a conservative lobbying group that wrote the ALEC resolution, said states were “herded” into adopting the standards with no time to deliberate on their worth. He called the standards “mediocre” and costly to implement.
The Common Core Curriculum
National education standards that even conservatives can love.
By Chester E. Finn, Jr. & Michael J. Petrilli
After votes yesterday in Massachusetts and the District of Columbia, 28 states have now embraced the new “Common Core” standards for primary and secondary education. Already, a majority — including red states such as South Carolina, Utah, and Oklahoma — have declared that they will use Common Core English and math standards in their public schools. Yet this profound, and we think positive, shift in American education is occurring with little outcry from the right, save for a half-dozen libertarians who don’t much care for government to start with. How come?
It certainly helps that the new standards were created by a voluntary partnership of 48 states, not by the federal government. But it’s also true that the Common Core standards are remarkably strong, vastly better than the standards most states have developed independently over the past 15 years. Yesterday, our institute released a 370-page study that finds the Common Core standards to be clearly superior to the existing English standards of 37 states and the existing math standards of 39.
One reason the Common Core fared so well is that its authors eschewed the vague and politically correct nonsense that infected so many state standards (and earlier attempts at national standards). They expect students to master arithmetic and memorize their times tables; they promote the teaching of phonics in the early grades; they even expect all students to read and understand the country’s founding documents. The new standards aren’t perfect. Our reviewers found three jurisdictions that did better in English (California, Indiana, and — believe it or not — the District of Columbia), mostly because they better distinguish among different “genres” of literature and other writing. Another dozen states (including Massachusetts) are “too close to call,” meaning that their standards are about equal in content and rigor to the Common Core. But anybody worried that this national effort will dumb down what we expect young Americans to learn in school can relax, at least for now.
Anxiety will surely rise when school kids across the land begin (three or four years hence) to take tests linked to these standards, and even more when those test results start to determine promotion from fifth to sixth grade or graduation from high school. (The development of those tests will soon start, aided by $350 million of federal stimulus funds.) But without tests and results-based accountability, along with solid curricula, quality textbooks, and competent teaching, standards alone have no traction in real classrooms. Adopting good standards is like having a goal for your cholesterol; it doesn’t mean you will actually eat a healthy diet or live longer.
When high expectations for schools and students are combined with smart implementation in thousands of classrooms, policymakers can move mountains. That’s the lesson we take from Massachusetts, which has established high standards, well-designed assessments, a tough-minded (yet humane) accountability system, rigorous certification requirements for teachers, and a high bar that students must clear to earn their diplomas. The Bay State has been making steady achievement gains in reading and math in both fourth and eighth grades. That, of course, is why Massachusetts politicians and policymakers sparred over the proposal by state education commissioner Mitchell Chester to replace the state’s standards and tests with the new national versions.
Until now, however, the vast majority of states have failed to adopt rigorous standards, much less to take actions geared to boosting pupil achievement. In 2007, we published a comparison of states’ “proficiency” expectations under the federal No Child Left Behind Act. The results were dismaying: In some places, students could score below the tenth percentile nationally and still be considered “proficient.” In other locales, they had to reach the 77th percentile to wear the same label. And it wasn’t just that expectations varied, but that they varied almost randomly from place to place, grade to grade, and year to year.
Most Americans understand that this is not the way a big, modernized country on a competitive planet should operate its education system. Three years ago, an Education Next poll asked whether people favored “a single national standard and a single national test for all students in the United States? Or do you think that there should be different standards and tests in different states?”
Who’s Behind the Common Core Curriculum?
Written by Sam Blumenfeld
Like so many education reform initiatives that seem to arise out of nowhere, the Common Core State Standards are another of these sweeping phantom movements that have gotten their impetus from a cadre of invisible human beings endowed with inordinate power to impose their ideas on everybody.
For example, the idea of collecting intimate personal data on public school students and teachers seems to have arisen spontaneously in the bowels of the National Center for Education Statistics in Washington. It required a small army of education psychologists to put together the data handbooks, which are periodically expanded to include more personal information.
Nobody knows who exactly authorized the creation of such a dossier on every student and teacher in American public schools, but the program exists and is being paid for by the taxpayer. And strange as it may seem, it arose seemingly out of nowhere, like a vampire, to suck the freedom out of the American people. Unlike Santa’s elves who work behind the scenes to bring happiness to children, these subterranean phantoms work overtime to find ways of making American children miserable.
The Common Core State Standards (CCSS) is another such vampire calculated not only to suck the freedom out of the American people, but also to suck out the brains of their children. And all of this is planned in the dark, away from the prying eyes of parents and writers like me. Ask any educator: “Who is the author of the Common Core Standards?” and they will not be able to tell you.
So I decided to look into the origin of the CCSS. It is said that it originated with the National Governors Association (NGA). When and where? At what meeting? At whose behest? The NGA’s Mission Statement says on its website:
The Common Core State Standards provide a consistent, clear understanding of what students are expected to learn, so teachers and parents know what they need to do to help them. The standards are designed to be robust and relevant to the real world, reflecting the knowledge and skills that our young people need for success in college and careers. With American students fully prepared for the future, our communities will be best positioned to compete successfully in the global economy.
Sounds wonderful. But why do we need it? Why are we re-inventing the wheel? Didn’t our public schools provide a decent education for the “greatest generation” when they were in school? That generation not only learned enough to win World War II but also enough to create the scientific foundation of our high-tech society. The only reason why we need the CCSS is because all of these graduate educationists need something to do to justify their degrees and the salaries that go with them. And of course the new curriculum will cost billions of dollars which will enable these vampires to live in the style to which they’ve become accustomed. By the way, if you object to my referring to these people as vampires, feel free to use your own designations.
The CCSS adds nothing to what we know about how to teach reading. It adds nothing to how we teach arithmetic and mathematics. It adds nothing to how we teach history, geography, and the “social studies.” In short, it is a fraud to get the American taxpayer to shell out big bucks for something that we already know how to do. Yes, science has greatly expanded, but it also expanded from 1850 to 1950 and didn’t require a different methodology from the scientific method developed by the great scientists of the past. We may have better equipment which students of science must learn to operate, but the scientific method has not changed.
And of course, the CCSS were made to be as complicated as possible so that no parent or normal human being could understand them. For example, there is something called “Common Core State Standards Official Identifiers and XML Representation.” It states:
As states, territories, the District of Columbia, and the Department of Defense Education Activity move from widespread adoption of the Common Core State Standards (CCSS) to implementation, there is a need to appropriately identify and link assets using a shared system of identifiers and a common XML representation. The Council of Chief State School Officers (CCSSO) and National Governors Association Center for Best Practices (NGA Center), working closely with the standards authors, have released an official, viable approach for publishing identifiers and XML designation to represent the standards, consistent with their adopted format, as outlined below.
So now we know that there is such a body as “the standards authors,” who work closely with such bureaucratic organizations as the Council of Chief State School Officers and the National Governors Association Center for Best Practices. And to make sure that the Standards are being correctly implemented, we read the following in typical vampire language:
De-referenceable Uniform Resource Identifier (URIs) at the corestandards.org domain, e.g. http://corestandards.org/2010/math/content/6/EE/1 or http://corestandards.org/2010/math/practice/MP7. Matching the published identifiers, these dereferenceable URIs allow individuals and technology systems to validate the content of a standard by viewing the web page at the identifier’s uniform resource locator (URL). The NGA Center and CCSSO strongly recommend that http://www.corestandards.org remain the address of record for referring to standards.
What kind of human beings not only write such gobbledegook but also know what it means? And these educationists are among the well-paid elite who know how to make everything so complicated that only they are capable of understanding their own complexity. Here’s more:
Globally unique identifiers (GUIDs), e.g. A7D3275BC52147618D6CFEE43FB1A47E. These allow, when needed, to refer to standards in both disciplines in a common format without removing the differences in the published identifiers. GUIDs are unwieldy for human use, but they are necessarily complex to guarantee uniqueness, an important characteristic for databases, and are intended for use by computer systems. There is no need for educators to decode GUIDs.
Did you read that line, “GUIDs are unwieldy for human use, but they are necessarily complex to guarantee uniqueness”? These people are masters at creating complexity for its own sake. The more complex, the more difficult it is for normal human beings to know what in blazes they are talking about.
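For readers curious what these two identifier schemes actually look like in practice, here is a minimal sketch. The function names are mine, for illustration only, and are not part of any official CCSS tooling; the URI pattern and 32-character GUID form simply mirror the examples quoted above.

```python
# Illustrative sketch of the two identifier schemes the quoted passage
# describes: human-readable dereferenceable URIs and machine-oriented GUIDs.
# (Hypothetical helper functions, not an official CCSS API.)
import uuid

def ccss_math_uri(grade, domain, standard):
    """Build a dereferenceable URI like the quoted example
    http://corestandards.org/2010/math/content/6/EE/1 -- a person can
    open it in a browser to check what the identifier refers to."""
    return f"http://corestandards.org/2010/math/content/{grade}/{domain}/{standard}"

def new_guid():
    """Generate a globally unique identifier in the 32-character
    uppercase hex form shown above (e.g. A7D3275BC52147618D6CFEE43FB1A47E).
    Opaque to humans, but safe as a database key."""
    return uuid.uuid4().hex.upper()

print(ccss_math_uri(6, "EE", 1))  # -> http://corestandards.org/2010/math/content/6/EE/1
print(len(new_guid()))            # -> 32
```

The contrast is the point: the URI carries its meaning in its structure (grade 6, Expressions & Equations, standard 1), while the GUID carries none at all and exists only to be unique.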
What is the National Governors Association Center for Best Practices? Here is what its website says:
The National Governors Association Center for Best Practices (NGA Center) develops innovative solutions to today’s most pressing public policy challenges and is the only research and development firm that directly serves the nation’s governors….
The mission of NGA Office of Federal Relations is to ensure governors’ views are represented in the shaping of federal policy. Policy positions, reflecting governors’ principles on priority issues, guide the association’s work to influence federal laws and regulations.
The initiative for the Common Core State Standards seems to have arisen from a speech that NGA Chairman Governor Paul Patton, Democrat of Kentucky, gave at the NGA meeting on June 12, 2002, in which he said:
Governors are constantly searching for solutions that will help all schools succeed, but some schools require more help than others. The long-term goal for states is to improve overall system performance while closing persistent gaps in achievement between minority and non-minority students. Fortunately, there are places to look for guidance. Although some schools continue to struggle, some have responded successfully to state reform efforts and others have gone far in improving student performance and closing the achievement gap. Current research also suggests there are ways state policies can effectively stimulate and support school improvement.
How that was translated into the need for the Common Core State Standards is not very clear. The Executive Director of the NGA is Dan Crippen, a Washington policy bureaucrat who was director of the Congressional Budget Office from 1999 to 2002. The Director of the NGA Center for Best Practices is David Moore, formerly of the Congressional Budget Office. The Director of the Education Division is Richard Laine. His profile states:
Laine directs research, policy analysis, technical assistance and resource development for the Education Division in the areas of early childhood, K-12, and postsecondary education. The Education Division is working on a number of key policy issues relevant to governors’ efforts to develop and support the implementation of policy, including: birth to 3rd grade access, readiness and quality; the Common Core State Standards, STEM and related assessments; teacher and leader effectiveness; turning around low-performing schools; high school redesign; competency-based learning; charter schools; and postsecondary (higher education & workforce training) access, success & affordability. The Division is also working on policy issues related to bridging the system divides between the early childhood, K-12 and postsecondary systems.
Well now we know who’s in charge of the Common Core State Standards. What is Mr. Laine’s background?
Previous Positions: Director of Education, The Wallace Foundation; Director of Education Policy and Initiatives, Illinois Business Roundtable; Associate Superintendent for Policy, Planning and Resource Management, Illinois State Board of Education; Executive Director, Coalition for Educational Rights; Executive Secretary, Committee for Educational Rights; School Finance Analyst, Chicago Panel on Public School Policy and Finance; Associate Director, California Democratic Congressional Delegation.
Education: M.P.P., M.B.A. and Certificate of Advanced Study in Education Administration and Public Policy, University of Chicago; B.A., University of California — Santa Barbara.
Obviously, Mr. Laine is one of those invisible bureaucrats who create policies for the governors, few of whom ever read them. He was Associate Director of California’s Democratic Congressional Delegation, which includes some of the worst left-wing members of Congress. He’s also in charge of “birth to 3rd grade access,” which the National Education Association strongly favors. Among Mr. Laine’s staff is Albert Wat, whose expertise is Early Childhood Education. His profile states:
Wat provides state policymakers with analyses and information on promising practices and the latest research in early childhood education policy, from birth through third grade. His work focuses on preschool education systems and alignment of early childhood and early elementary practices and policies, including standards, assessments and data systems.
Previous Positions: Research Manager, Senior Research Associate and State Policy Analyst, The Pew Charitable Trusts, Pew Center on the States, Pre-K Now.
Education: Master of Arts in Education Policy Studies, The George Washington University; Nonprofit Management Executive Certificate, Georgetown University; Master of Arts in Education, with focus in Social Sciences in Education and Bachelor of Arts in Psychology, with Distinction, Stanford University.
Like so many Washington policy wonks, Mr. Wat has to justify his bureaucratic position by thinking up new ways to create costly education reform that no freedom-loving citizen wants. Note his and Mr. Laine’s interest in “birth to 3rd grade” education, an area traditionally left up to parents. But then the totalitarian mind wants control over everything and everybody.
In other words, the Common Core State Standards have no more legitimacy than the plans of your local village idiot to reform education. They are the thought emanations of those who have nothing better to do. Yet, they will cost the American taxpayer billions of dollars and make American public education more confusing than ever.
Common Core State Standards Initiative
The Common Core State Standards Initiative is a U.S. education initiative that seeks to bring diverse state curricula into alignment with each other by following the principles of standards-based education reform. The initiative is sponsored by the National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO).
The past twenty years in the U.S. have also been termed the “Accountability Movement,” as states are being held to mandatory tests of student achievement, which are expected to demonstrate a common core of knowledge that all citizens should have to be successful in this country. As part of this overarching education reform movement, the nation’s governors and corporate leaders founded Achieve, Inc. in 1996 as a bipartisan organization to raise academic standards and graduation requirements, improve assessments, and strengthen accountability in all 50 states. The initial motivation for the development of the Common Core State Standards was part of the American Diploma Project (ADP).
A report titled “Ready or Not: Creating a High School Diploma That Counts,” from 2004, found that both employers and colleges are demanding more of high school graduates than in the past. According to Achieve, Inc., “current high-school exit expectations fall well short of [employer and college] demands.” The report explains that a major problem facing the American school system is that high school graduates are not provided with the skills and knowledge they need to succeed. “While students and their parents may still believe that the diploma reflects adequate preparation for the intellectual demands of adult life, in reality it falls far short of this common-sense goal.” (page 1). The report continues that the diploma itself has lost its value because graduates could not compete successfully beyond high school, and that the solution to this problem is a common set of rigorous standards.
In 2009 the National Governors Association hired David Coleman and Student Achievement Partners to write curriculum standards in the areas of literacy and mathematics instruction. Announced on June 1, 2009, the initiative’s stated purpose is to “provide a consistent, clear understanding of what students are expected to learn, so teachers and parents know what they need to do to help them.” Additionally, “The standards are designed to be robust and relevant to the real world, reflecting the knowledge and skills that our young people need for success in college and careers,” which will place American students in a position in which they can compete in a global economy. Forty-five of the fifty states in the United States are members of the initiative, with the states of Texas, Virginia, Alaska, and Nebraska not adopting the initiative at a state level. Minnesota has adopted the English Language Arts standards but not the Mathematics standards.
Standards were released for mathematics and English language arts on June 2, 2010, with a majority of states adopting the standards in the subsequent months. (See below for current status.) States were given an incentive to adopt the Common Core Standards through the possibility of competitive federal Race to the Top grants. President Obama and Secretary of Education Arne Duncan announced the Race to the Top competitive grants on July 24, 2009, as a motivator for education reform. To be eligible, states had to adopt “internationally benchmarked standards and assessments that prepare students for success in college and the work place.” This meant that in order for a state to be eligible for these grants, the states had to adopt the Common Core State Standards or a similar career and college readiness curriculum. The competition for these grants provided a major push for states to adopt the standards. The adoption dates for those states that chose to adopt the Common Core State Standards Initiative are all within the two years following this announcement. The common standards are funded by the governors and state schools chiefs, with additional support from the Bill and Melinda Gates Foundation, the Charles Stewart Mott Foundation, and others. States are planning to implement this initiative by 2015 by basing at least 85% of their state curricula on the Standards.
Standards have so far been released only for English language arts and mathematics; standards have not yet been developed for science or social studies.
English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects
The stated goal of the English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects standards is to ensure that students are college and career ready in literacy no later than the end of high school (page 3). There are five key components to the standards for English Language Arts: Reading, Writing, Speaking and Listening, Language, and Media and Technology. The essential components and breakdown of each of these key points within the standards are as follows:
Reading
- As students advance through each grade, there is an increased level of complexity to what students are expected to read and there is also a progressive development of reading comprehension so that students can gain more from what they read.
- There is no reading list to accompany the reading standards. Instead, students are simply expected to read a range of classic and contemporary literature as well as challenging informative texts from an array of subjects. This is so that students can acquire new knowledge, insights, and consider varying perspectives as they read. Teachers, school districts, and states are expected to decide on the appropriate curriculum, but sample texts are included to help teachers, students, and parents prepare for the year ahead.
- There is some critical content for all students — classic myths and stories from around the world, foundational U.S. documents, seminal works of American literature, and the writings of Shakespeare — but the rest is left up to the states and the districts.
Writing
- The driving force of the writing standards is logical argument based on claims, solid reasoning, and relevant evidence. The standards also include opinion writing, even within the K–5 grades.
- Short, focused research projects, similar to the kind students will face in their careers, as well as long-term, in-depth research, are another important piece of the writing standards. This is because written analysis and the presentation of significant findings are critical to career and college readiness.
- The standards also include annotated samples of student writing to help determine performance levels in writing arguments, explanatory texts, and narratives across the grades.
Speaking and Listening
- Although reading and writing are the expected components of an ELA curriculum, standards are written so that students gain, evaluate, and present complex information, ideas, and evidence specifically through listening and speaking.
- There is also an emphasis on academic discussion in one-on-one, small-group, and whole-class settings, which can take place as formal presentations as well as informal discussions during student collaboration.
Language
- Vocabulary instruction in the standards takes place through a mix of conversations, direct instruction, and reading so that students can determine word meanings and can expand their use of words and phrases.
- The standards expect students to use formal English in their writing and speaking, but also recognize that colleges and 21st century careers will require students to make wise, skilled decisions about how to express themselves through language in a variety of contexts.
- Vocabulary and conventions are their own strand because these skills extend across reading, writing, speaking, and listening.
Media and Technology
- Because media and technology are intertwined with every student’s life in and out of school in the 21st century, the standards also include skills related to media use, including the analysis and production of various forms of media.
Preliminary “example” works to be studied by students include works by Ovid, Atul Gawande, Voltaire, Shakespeare, Turgenev, Poe, Robert Frost, Yeats, Nathaniel Hawthorne, Amy Tan, and Julia Alvarez.
Cursive and keyboarding
The standards do not mandate the teaching of cursive handwriting, although states are free either to add a cursive requirement or to permit individual school districts to require it. The standards include instruction in keyboarding.
The stated goal of the mathematics Standards is to achieve greater focus and coherence in the curriculum (page 3). This is largely in response to the criticism that American mathematics curricula are “a mile wide and an inch deep”.
The mathematics Standards include Standards for Mathematical Practice and Standards for Mathematical Content.
The Standards mandate that eight principles of mathematical practice be taught:
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Use appropriate tools strategically.
- Attend to precision.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
The practices are adapted from the five process standards of the National Council of Teachers of Mathematics and the five strands of proficiency in the National Research Council’s Adding It Up report. These practices are to be taught in every grade from kindergarten to twelfth grade. Details of how these practices are to be connected to each grade level’s mathematics content are left to local implementation of the Standards.
As an example of mathematical practice, here is the full description of the sixth practice:
6 Attend to precision.
Mathematically proficient students try to communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose, including using the equal sign consistently and appropriately. They are careful about specifying units of measure, and labeling axes to clarify the correspondence with quantities in a problem. They calculate accurately and efficiently, express numerical answers with a degree of precision appropriate for the problem context. In the elementary grades, students give carefully formulated explanations to each other. By the time they reach high school they have learned to examine claims and make explicit use of definitions.
The Standards lay out the mathematics content that should be learned at each grade level from kindergarten to Grade 8 (age 13-14), as well as the mathematics to be learned in high school. The Standards do not dictate any particular pedagogy or what order topics should be taught within a particular grade level. Mathematical content is organized in a number of domains. At each grade level there are several standards for each domain, organized into clusters of related standards. (See examples below.)
Four domains are included in each of the grades from kindergarten (age 5-6) to fifth grade (age 10-11):
- Operations and Algebraic Thinking;
- Number and Operations in Base Ten;
- Measurement and Data;
- Geometry.
Kindergarten also includes the domain Counting and Cardinality. Grades 3 to 5 also include the domain Number and Operations–Fractions.
Four domains are included in each of the Grades 6 through 8:
- The Number System;
- Expressions and Equations;
- Geometry;
- Statistics and Probability.
Grades 6 and 7 also include the domain Ratios and Proportional Relationships. Grade 8 includes the domain Functions.
In addition to detailed standards (of which there are 21 to 28 for each grade from kindergarten to eighth grade), the Standards present an overview of “critical areas” for each grade. (See examples below.)
In high school (Grades 9 to 12), the Standards do not specify which content is to be taught at each grade level. Up to Grade 8, the curriculum is integrated; students study four or five different mathematical domains every year. The Standards do not dictate whether the curriculum should continue to be integrated in high school with study of several domains each year (as is done in other countries, as well as New York and Georgia), or whether the curriculum should be separated out into separate year-long algebra and geometry courses (as has been the tradition in most U.S. states). An appendix to the Standards describes four possible pathways for covering high school content (two traditional and two integrated), but states are free to organize the content any way they want.
There are six conceptual categories of content to be covered at the high school level:
- Number and quantity;
- Algebra;
- Functions;
- Modeling;
- Geometry;
- Statistics and probability.
Some topics in each category are indicated only for students intending to take more advanced, optional courses such as calculus, advanced statistics, or discrete mathematics. Even if the traditional sequence is adopted, functions and modeling are to be integrated across the curriculum, not taught as separate courses. In fact, modeling is also a Mathematical Practice (see above), and is meant to be integrated across the entire curriculum beginning in kindergarten. The modeling category does not have its own standards; instead, high school standards in other categories which are intended to be considered part of the modeling category are indicated in the Standards with a star symbol.
Each of the six high school categories includes a number of domains. For example, the “number and quantity” category contains four domains: the real number system; quantities; the complex number system; and vector and matrix quantities. The “vector and matrix quantities” domain is reserved for advanced students, as are some of the standards in “the complex number system”.
Examples of mathematical content
Second grade example: In the second grade there are 26 standards in four domains. The four critical areas of focus for second grade are (1) extending understanding of base-ten notation; (2) building fluency with addition and subtraction; (3) using standard units of measure; and (4) describing and analyzing shapes. Below are the second grade standards for the domain of “operations and algebraic thinking” (Domain 2.OA). This second grade domain contains four standards, organized into three clusters:
- Represent and solve problems involving addition and subtraction.
- 1. Use addition and subtraction within 100 to solve one- and two-step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.
- Add and subtract within 20.
- 2. Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
- Work with equal groups of objects to gain foundations for multiplication.
- 3. Determine whether a group of objects (up to 20) has an odd or even number of members, e.g., by pairing objects or counting them by 2s; write an equation to express an even number as a sum of two equal addends.
- 4. Use addition to find the total number of objects arranged in rectangular arrays with up to 5 rows and up to 5 columns; write an equation to express the total as a sum of equal addends.
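The last two standards can be illustrated with a short worked example (the specific numbers here are ours, not taken from the Standards):

```latex
% Standard 3: express the even number 10 as a sum of two equal addends
10 = 5 + 5
% Standard 4: a 3-row by 5-column rectangular array as a sum of equal addends
15 = \underbrace{5 + 5 + 5}_{3 \text{ rows of } 5}
```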
Domain example: As an example of the development of a domain across several grades, here are the clusters for learning fractions (Domain NF, which stands for “Number and Operations—Fractions”) in Grades 3 through 6. Each cluster contains several standards (not listed here):
- Grade 3:
- Develop an understanding of fractions as numbers.
- Grade 4:
- Extend understanding of fraction equivalence and ordering.
- Build fractions from unit fractions by applying and extending previous understandings of operations on whole numbers.
- Understand decimal notation for fractions, and compare decimal fractions.
- Grade 5:
- Use equivalent fractions as a strategy to add and subtract fractions.
- Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
In Grade 6, there is no longer a “number and operations—fractions” domain, but students learn to divide fractions by fractions in the number system domain.
High school example: As an example of a high school category, here are the domains and clusters for algebra. There are four algebra domains (listed below), each of which is broken down into as many as four clusters (bullet points below). Each cluster contains one to five detailed standards (not listed here). Starred domains and standards, such as Creating Equations (A-CED), are also intended to be part of the modeling category.
- Seeing Structure in Expressions (A-SSE)
- Interpret the structure of expressions
- Write expressions in equivalent forms to solve problems
- Arithmetic with Polynomials and Rational Functions (A-APR)
- Perform arithmetic operations on polynomials
- Understand the relationship between zeros and factors of polynomials
- Use polynomial identities to solve problems
- Rewrite rational expressions
- Creating Equations ★ (A-CED)
- Create equations that describe numbers or relationships
- Reasoning with Equations and Inequalities (A-REI)
- Understand solving equations as a process of reasoning and explain the reasoning
- Solve equations and inequalities in one variable
- Solve systems of equations
- Represent and solve equations and inequalities graphically
As an example of detailed high school standards, the first cluster above is broken down into two standards as follows:
- Interpret the structure of expressions
- 1. Interpret expressions that represent a quantity in terms of its context.★
- a. Interpret parts of an expression, such as terms, factors, and coefficients.
- b. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret P(1+r)^n as the product of P and a factor not depending on P.
- 2. Use the structure of an expression to identify ways to rewrite it. For example, see x^4 – y^4 as (x^2)^2 – (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2 – y^2)(x^2 + y^2).
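Written out as a worked derivation (our elaboration, not text from the Standards), the difference-of-squares example substitutes u = x^2 and v = y^2 into the identity u^2 – v^2 = (u – v)(u + v):

```latex
x^4 - y^4 = (x^2)^2 - (y^2)^2          % rewrite each term as a square
          = (x^2 - y^2)(x^2 + y^2)     % apply u^2 - v^2 = (u - v)(u + v)
          = (x - y)(x + y)(x^2 + y^2)  % the first factor is itself a difference of squares
```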
Different standards, by state
States vary in how they implement the standards. Common themes include:
- Emphasizing basic arithmetic and fractions in elementary school, with a focus on memorization rather than reliance on calculators.
- Expecting Algebra I capability of elementary school graduates and Algebra II capability of high school graduates.
- Increasing the difficulty of the books students read, with less emphasis on how students “feel” about a book and more on analyzing its content.
- Planning computer-based testing with results available almost “instantly.”
Critics question forcing a rigid template on schools already coping with other initiatives like No Child Left Behind. For some states, this will be the third (or more) major change over the past 16 years.
Some critics also question whether there was a demand for new state standards in the first place. According to the NGA and the CCSSO, one motivating factor was the U.S.’s ranking on international tests; however, there does not appear to be a relationship between the U.S.’s low scores on those tests and its economic ranking. The United States has ranked first or second in the World Economic Forum’s competitiveness rankings since 1998 despite scoring near the bottom on international mathematics and science assessments for the past 50 years.
In June 2011, the Voice of America Special English reported on the common core standards on its weekly Education Report for people learning American English. Some commentators criticized the idea that “one size fits all.”
In a Huffington Post piece, “Do We Need a Common Core?”, Nicholas Tampio raised two objections to the Common Core. First, he points to “America’s historical commitment to local control over school districts”; second, he disputes, anecdotally, the claim that the program provides appropriate benchmarks for all students everywhere. He recounts the changes in his son’s kindergarten as the teacher began spending more time teaching from the Common Core curriculum, and says an “inspired kindergarten curriculum has been replaced with a banal one.”
Adoption of Common Core Standards by states
As of January 15, 2013, Texas and Alaska were the only states that were not members of the initiative. Nebraska and Virginia are members but have decided not to adopt the standards. Minnesota rejected the Common Core standards for mathematics but adopted the English/language arts standards. The District of Columbia, the U.S. Virgin Islands, Guam, the Northern Mariana Islands, and American Samoa have also adopted the standards; Puerto Rico has not. In some states that formally adopted the standards, repeal legislation was introduced in February 2013, and in one state a repeal passed the State Senate on February 21, 2013.
With the implementation of new standards, states are also required to adopt new assessment benchmarks to measure student achievement. According to the Common Core State Standards Initiative website, formal assessment is expected to take place in the 2014–2015 school year, which coincides with the projected implementation year for most states. The assessments have yet to be created, but two consortia formed, each with a different approach to assessing the standards. “26 states formed the PARCC RttT Assessment Consortium. Their approach focused on computer-based ‘through-course assessments’ in each grade combined with streamlined end of year tests, including performance tasks.” The second consortium, “the SMARTER Balanced Consortium, brought together 31 states proposing to create adaptive online exams.” The final decision of which assessment to use rests with individual state education agencies. The Common Core State Standards website explains that some states plan to work together to create a common, universal assessment system based on the standards, while others are choosing to work independently or through these two consortia. Both leading consortia propose computer-based exams with fewer selected- and constructed-response test items, a move away from the standardized tests most students currently take. This kind of assessment would be better aligned to college and career readiness, but it poses real challenges given the limited computer and technology resources available to some schools.
For resources to use in the classroom visit http://www.commoncoreconversation.com/
John Stossel – DDT
Demonizing DDT: Challenging The Scare Campaign That Has Cost Millions of Lives
“In The Excellent Powder: DDT’s Political and Scientific History, Richard Tren and Donald Roberts argue that the infamous insecticide is one of the world’s greatest public-health success stories, having saved millions of lives by preventing insect-borne disease. Unfortunately for those in areas still infested with mosquitoes and other flying bugs, DDT is also the world’s most-misunderstood substance, the target of a decades-long scientifically ignorant and ideologically motivated campaign that has vastly limited its use and applications.
From Rachel Carson in the 1960s to contemporary critics, DDT has been the object of what Roberts, a professor of tropical public health at the Uniformed Services University of the Health Sciences, calls “scare campaigns” that link DDT to “theoretical harms to wildlife and human life that simply don’t exist.”
Dubbed “the excellent powder” by Winston Churchill for its life-saving qualities, DDT has the potential to transform the developing world from a malarial hell into something else again. Yet as Tren, the winner of the 2009 Julian L. Simon Award, warns, under current international conventions, global DDT production is scheduled to be halted in 2017, thereby consigning much of the world to less-effective and more-expensive alternatives that will consign millions of poor people to living hell.
Reason.tv’s Nick Gillespie sat down with Tren and Roberts, who are part of Africa Fighting Malaria, to talk about how DDT got such a bad rap and what can be done to set the record straight.”
15-108 Science Matters: DDT & Modern Environmental Movement II
“…Malaria is a mosquito-borne infectious disease of humans and other animals caused by protists (a type of microorganism) of the genus Plasmodium. The protists first infect the liver, then act as parasites within red blood cells, causing symptoms that typically include fever and headache, in severe cases progressing to coma or death. The disease is widespread in tropical and subtropical regions in a broad band around the equator, including much of Sub-Saharan Africa, Asia, and the Americas.
Five species of Plasmodium can infect and be transmitted by humans. Severe malaria is largely caused by P. falciparum while the disease caused by P. vivax, P. ovale, and P. malariae is generally a milder form that is rarely fatal. The zoonotic species P. knowlesi, prevalent in Southeast Asia, causes malaria in macaques but can also cause severe infections in humans. Malaria is prevalent in tropical regions because the significant amounts of rainfall, consistently high temperatures and high humidity, along with stagnant waters in which mosquito larvae readily mature, provide them with the environment they need for continuous breeding. Disease transmission can be reduced by preventing mosquito bites by distribution of mosquito nets and insect repellents, or with mosquito-control measures such as spraying insecticides and draining standing water.
The World Health Organization has estimated that in 2010, there were 216 million documented cases of malaria. Around 655,000 people died from the disease, many of whom were children under the age of five. The actual number of deaths may be significantly higher, as precise statistics are unavailable in many rural areas, and many cases are undocumented. P. falciparum — responsible for the most severe form of malaria — causes the vast majority of deaths associated with the disease. Malaria is commonly associated with poverty, and can indeed be a cause of poverty and a major hindrance to economic development.
Despite a clear need, no vaccine offering a high level of protection currently exists. Efforts to develop one are ongoing. Several medications are available to prevent malaria in travelers to malaria-endemic countries (prophylaxis). A variety of antimalarial medications are available. Severe malaria is treated with intravenous or intramuscular quinine or, since the mid-2000s, the artemisinin derivative artesunate, which is superior to quinine in both children and adults and is given in combination with a second anti-malarial such as mefloquine. Resistance has developed to several antimalarial drugs, most notably chloroquine and artemisinin.
Signs and symptoms
The signs and symptoms of malaria typically begin 8–25 days following infection, although symptoms may occur later in those who have taken antimalarial medications as prevention. The presentation may include fever, shivering, arthralgia (joint pain), vomiting, hemolytic anemia, jaundice, hemoglobinuria, retinal damage, and convulsions. Approximately 30% of people, however, will no longer have a fever upon presenting to a health care facility.
The classic symptom of malaria is cyclical occurrence of sudden coldness followed by rigor and then fever and sweating lasting about two hours or more, occurring every two days in P. vivax and P. ovale infections, and every three days for P. malariae. P. falciparum infection can cause recurrent fever every 36–48 hours or a less pronounced and almost continuous fever. For reasons that are poorly understood, but that may be related to high intracranial pressure, children with malaria frequently exhibit abnormal posturing, a sign indicating severe brain damage. Cerebral malaria is associated with retinal whitening, which may be a useful clinical sign in distinguishing malaria from other causes of fever.
Severe malaria is usually caused by P. falciparum, and typically arises 6–14 days after infection. However, non-falciparum species have been found to be the cause of ~14% of severe malaria cases in some groups. Consequences of severe malaria include coma and death if untreated; young children and pregnant women are especially vulnerable. Splenomegaly (enlarged spleen), severe headache, cerebral ischemia, hepatomegaly (enlarged liver), hypoglycemia, and hemoglobinuria with renal failure may occur. Renal failure is a feature of blackwater fever, where hemoglobin from lysed red blood cells leaks into the urine.
A Plasmodium sporozoite traverses the cytoplasm of a mosquito midgut epithelial cell in this false-colour electron micrograph.
Malaria parasites are from the genus Plasmodium (phylum Apicomplexa). In humans, malaria is caused by P. falciparum, P. malariae, P. ovale, P. vivax and P. knowlesi. Among those infected, P. falciparum is the most common species identified (~75%) followed by P. vivax (~20%). P. falciparum accounts for the majority of deaths. P. vivax proportionally is more common outside of Africa. There have been documented human infections with several species of Plasmodium from higher apes; however, with the exception of P. knowlesi—a zoonotic species that causes malaria in macaques—these are mostly of limited public health importance.
The definitive hosts for malaria parasites are female mosquitoes of the Anopheles genus, which act as transmission vectors to humans and other vertebrates, the secondary hosts. Young mosquitoes first ingest the malaria parasite by feeding on an infected vertebrate carrier and the infected Anopheles mosquitoes eventually carry Plasmodium sporozoites in their salivary glands. A mosquito becomes infected when it takes a blood meal from an infected vertebrate. Once ingested, the parasite gametocytes taken up in the blood will further differentiate into male or female gametes and then fuse in the mosquito’s gut. This produces an ookinete that penetrates the gut lining and produces an oocyst in the gut wall. When the oocyst ruptures, it releases sporozoites that migrate through the mosquito’s body to the salivary glands, where they are then ready to infect a new human host. The sporozoites are injected into the skin, alongside saliva, when the mosquito takes a subsequent blood meal. This type of transmission is occasionally referred to as anterior station transfer.
Only female mosquitoes feed on blood; male mosquitoes feed on plant nectar, and thus do not transmit the disease. The females of the Anopheles genus of mosquito prefer to feed at night. They usually start searching for a meal at dusk, and will continue throughout the night until taking a meal. Malaria parasites can also be transmitted by blood transfusions, although this is rare.
Malaria recurs after treatment for three reasons. Recrudescence occurs when parasites are not cleared by treatment, whereas reinfection indicates complete clearance with new infection established from a separate infective mosquito bite; both can occur with any malaria parasite species. Relapse is specific to P. vivax and P. ovale and involves re-emergence of blood-stage parasites from latent parasites (hypnozoites) in the liver. Describing a case of malaria as cured by observing the disappearance of parasites from the bloodstream can, therefore, be deceptive. The longest incubation period reported for a P. vivax infection is 30 years. Approximately one in five of P. vivax malaria cases in temperate areas involve overwintering by hypnozoites, with relapses beginning the year after the mosquito bite.
The life cycle of malaria parasites. A mosquito causes infection by taking a blood meal. First, sporozoites enter the bloodstream, and migrate to the liver. They infect liver cells, where they multiply into merozoites, rupture the liver cells, and return to the bloodstream. Then, the merozoites infect red blood cells, where they develop into ring forms, trophozoites and schizonts that in turn produce further merozoites. Sexual forms are also produced, which, if taken up by a mosquito, will infect the insect and continue the life cycle.
Malaria infection develops via two phases: one that involves the liver or hepatic system (exoerythrocytic), and one which involves red blood cells, or erythrocytes (erythrocytic). When an infected mosquito pierces a person’s skin to take a blood meal, sporozoites in the mosquito’s saliva enter the bloodstream and migrate to the liver where they infect hepatocytes, multiplying asexually and asymptomatically for a period of 8–30 days. After a potential dormant period in the liver, these organisms differentiate to yield thousands of merozoites, which, following rupture of their host cells, escape into the blood and infect red blood cells to begin the erythrocytic stage of the life cycle. The parasite escapes from the liver undetected by wrapping itself in the cell membrane of the infected host liver cell.
Within the red blood cells, the parasites multiply further, again asexually, periodically breaking out of their hosts to invade fresh red blood cells. Several such amplification cycles occur. Thus, classical descriptions of waves of fever arise from simultaneous waves of merozoites escaping and infecting red blood cells.
Some P. vivax sporozoites do not immediately develop into exoerythrocytic-phase merozoites, but instead produce hypnozoites that remain dormant for periods ranging from several months (6–12 months is typical) to as long as three years. After a period of dormancy, they reactivate and produce merozoites. Hypnozoites are responsible for long incubation and late relapses in P. vivax infections, although their existence in P. ovale is uncertain.
The parasite is relatively protected from attack by the body’s immune system because for most of its human life cycle it resides within the liver and blood cells and is relatively invisible to immune surveillance. However, circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P. falciparum parasite displays adhesive proteins on the surface of the infected blood cells, causing the blood cells to stick to the walls of small blood vessels, thereby sequestering the parasite from passage through the general circulation and the spleen. The blockage of the microvasculature causes symptoms such as in placental and cerebral malaria. In cerebral malaria the sequestrated red blood cells can breach the blood–brain barrier possibly leading to coma.
Micrograph of a placenta from a stillbirth due to maternal malaria. H&E stain. Red blood cells are anuclear; blue/black staining within the bright red structures (red blood cells) indicates foreign nuclei from the parasites.
Although the red blood cell surface adhesive proteins (called PfEMP1, for P. falciparum erythrocyte membrane protein 1) are exposed to the immune system, they do not serve as good immune targets, because of their extreme diversity; there are at least 60 variations of the protein within a single parasite and even more variants within whole parasite populations. The parasite switches between a broad repertoire of PfEMP1 surface proteins, thus staying one step ahead of the pursuing immune system.
Some merozoites turn into male and female gametocytes. If a mosquito pierces the skin of an infected person, it potentially picks up gametocytes within the blood. Fertilization and sexual recombination of the parasite occurs in the mosquito’s gut. New sporozoites develop and travel to the mosquito’s salivary gland, completing the cycle. Pregnant women are especially attractive to the mosquitoes, and malaria in pregnant women is an important cause of stillbirths, infant mortality and low birth weight, particularly in P. falciparum infection, but also in other species infection, such as P. vivax.
Main article: Genetic resistance to malaria
Due to the high levels of mortality and morbidity caused by malaria—especially the P. falciparum species—it is thought to have placed the greatest selective pressure on the human genome in recent history. Several inherited conditions may provide some resistance to it, including sickle cell disease, thalassaemias and glucose-6-phosphate dehydrogenase deficiency, as well as the absence of Duffy antigens on the subject’s red blood cells.
The impact of sickle cell anemia on malaria immunity is of particular interest. Sickle cell anemia causes a defect in the hemoglobin molecule in the blood: instead of the cell retaining its normal biconcave shape, the modified hemoglobin S molecule causes it to sickle, or distort into a curved shape. In sickled cells the hemoglobin is less effective at taking up or releasing oxygen, and malaria parasites cannot complete their life cycle in the cell. Individuals who are homozygous for sickle cell anemia seldom survive the defect, while those who are heterozygous show substantial resistance to malaria. Although the risk of death for those with the homozygous condition would seem unfavorable to population survival, the trait is preserved because of the benefit provided by the heterozygous form.
Hepatic dysfunction as a result of malaria is rare and is usually a result of a coexisting liver condition such as viral hepatitis and chronic liver disease. Hepatitis, which is characterized by inflammation of the liver, is not actually present in what is called malarial hepatitis; the term as used here invokes the reduced liver function associated with severe malaria. While traditionally considered a rare occurrence, malarial hepatopathy has seen an increase in malaria endemic areas, particularly in Southeast Asia and India. Liver compromise in people with malaria correlates with a greater likelihood of complications and death.
Main article: Diagnosis of malaria
Malaria is typically diagnosed by microscopic examination of blood films or by antigen-based rapid diagnostic tests. Rapid diagnostic tests that detect P. vivax are not as effective as those targeting P. falciparum, and they cannot determine how many parasites are present. Areas that cannot afford laboratory diagnostic tests often use only a history of subjective fever as the indication to treat for malaria. Polymerase chain reaction-based tests have been developed, though as of 2012 they were not widely deployed in malaria-endemic regions due to their complexity.
Malaria is classified as either severe or uncomplicated by the World Health Organization (WHO). Severe malaria is diagnosed when any of the following criteria are present; otherwise the disease is considered uncomplicated.
- Decreased consciousness
- Significant weakness such that the person is unable to walk
- Inability to feed
- Two or more convulsions
- Low blood pressure (systolic pressure less than 70 mmHg in adults or 50 mmHg in children)
- Breathing problems
- Circulatory shock
- Kidney failure or hemoglobin in the urine
- Bleeding problems, or hemoglobin less than 5 g/dl
- Pulmonary edema
- Low blood glucose (less than 2.2 mmol/l / 40 mg/dl)
- Acidosis or lactate levels of greater than 5 mmol/l
- A parasite level in the blood of greater than 2%
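The criteria above amount to a simple any-of rule. As an illustrative sketch (not clinical software — the field names, default values, and unit handling here are invented for this example), the classification could be encoded like this:

```python
def classify_malaria(findings, adult=True):
    """Return 'severe' if any WHO criterion above is met, else 'uncomplicated'."""
    sbp_threshold = 70 if adult else 50  # systolic pressure, mmHg
    severe = (
        findings.get("decreased_consciousness", False)
        or findings.get("unable_to_walk", False)
        or findings.get("unable_to_feed", False)
        or findings.get("convulsions", 0) >= 2
        or findings.get("systolic_bp_mmhg", 120) < sbp_threshold
        or findings.get("breathing_problems", False)
        or findings.get("circulatory_shock", False)
        or findings.get("kidney_failure", False)
        or findings.get("hemoglobinuria", False)
        or findings.get("bleeding", False)
        or findings.get("hemoglobin_g_dl", 12.0) < 5.0
        or findings.get("pulmonary_edema", False)
        or findings.get("glucose_mmol_l", 5.0) < 2.2
        or findings.get("acidosis", False)
        or findings.get("lactate_mmol_l", 1.0) > 5.0
        or findings.get("parasitemia_percent", 0.0) > 2.0
    )
    return "severe" if severe else "uncomplicated"

print(classify_malaria({"convulsions": 2}))            # severe
print(classify_malaria({"parasitemia_percent": 1.0}))  # uncomplicated
```

Note that the blood-pressure threshold differs between adults and children, which is why the rule takes an `adult` flag.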
Anopheles albimanus mosquito feeding on a human arm. This mosquito is a vector of malaria, and mosquito control is an effective way of reducing the incidence of the disease.
Methods used to prevent malaria include medications, mosquito eradication and the prevention of bites. The presence of malaria in an area requires a combination of high human population density, high mosquito population density and high rates of transmission from humans to mosquitoes and from mosquitoes to humans. If any of these is lowered sufficiently, the parasite will eventually disappear from that area, as happened in North America, Europe and much of the Middle East. However, unless the parasite is eliminated from the whole world, it could become re-established if conditions revert to a combination that favours the parasite’s reproduction. Many countries are seeing an increasing number of imported malaria cases owing to extensive travel and migration.
Many researchers argue that prevention of malaria may be more cost-effective than treatment of the disease in the long run, but the capital costs required are out of reach of many of the world’s poorest people. There is a wide disparity in the costs of control (i.e. maintenance of low endemicity) and elimination programs between countries. For example, in China—whose government in 2010 announced a strategy to pursue malaria elimination in the Chinese provinces—the required investment is a small proportion of public expenditure on health. In contrast, a similar program in Tanzania would cost an estimated one-fifth of the public health budget.
Man spraying kerosene oil to protect against mosquitoes carrying malaria, Panama Canal Zone 1912
Efforts to eradicate malaria by eliminating mosquitoes have been successful in some areas. Malaria was once common in the United States and southern Europe, but vector control programs, in conjunction with the monitoring and treatment of infected humans, eliminated it from those regions. In some areas, the draining of wetland breeding grounds and better sanitation were adequate. Malaria was eliminated from most parts of the USA in the early 20th century by such methods, and the use of the pesticide DDT and other means eliminated it from the remaining pockets in the South by 1951. (see National Malaria Eradication Program)
Before DDT, malaria was successfully eradicated or controlled in tropical areas like Brazil and Egypt by removing or poisoning the breeding grounds of the mosquitoes or the aquatic habitats of the larval stages, for example by applying the highly toxic arsenic compound Paris Green to places with standing water. This method has seen little application in Africa for more than half a century.
A more targeted and ecologically friendly vector control strategy involves genetic manipulation of malaria mosquitoes. Advances in genetic engineering technologies make it possible to introduce foreign DNA into the mosquito genome and either decrease the lifespan of the mosquito or make it more resistant to the malaria parasite. The sterile insect technique is a genetic control method whereby large numbers of sterile male mosquitoes are reared and released. Mating with wild females reduces the wild population in the subsequent generation; repeated releases eventually eradicate the target population. Progress towards transgenic, or genetically modified, insects suggests that wild mosquito populations could be made malaria-resistant. Successful replacement of current populations with a new genetically modified population relies upon a drive mechanism, such as transposable elements, to allow for non-Mendelian inheritance of the gene of interest. Although this approach has been used successfully to eradicate some parasitic diseases of veterinary importance, technological problems have hindered its effective deployment with malaria vector species.
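The population logic behind the sterile insect technique can be illustrated with a toy model (my own simplification, not taken from the text): each generation, only the fraction of matings involving wild males produces offspring, so sustained releases of sterile males shrink the population multiplicatively.

```python
# Toy sterile-insect-technique model. All numbers are illustrative:
# growth_rate is a hypothetical per-generation reproduction factor.
def next_generation(females, wild_males, sterile_males, growth_rate=5.0):
    """Offspring count when sterile males compete with wild males for mates."""
    fertile_fraction = wild_males / (wild_males + sterile_males)
    return females * growth_rate * fertile_fraction

pop = 1000.0  # wild females (assume an equal number of wild males)
for _ in range(6):
    pop = next_generation(pop, pop, sterile_males=20000.0)

# With releases large relative to the wild population, the modeled
# population collapses toward zero within a few generations.
print(pop)
```

The key qualitative point matches the text: suppression compounds generation over generation, so repeated releases can eradicate a local population even though each release kills no mosquitoes directly.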
Indoor residual spraying
Further information: Indoor residual spraying and DDT and malaria
Indoor residual spraying (IRS) is the practice of spraying insecticides on the interior walls of homes in malaria-affected areas. After feeding, many mosquito species rest on a nearby surface while digesting the bloodmeal, so if the walls of dwellings have been coated with insecticides, the resting mosquitoes will be killed before they can bite another victim and transfer the malaria parasite.
The first pesticide used for IRS was DDT. Although it was initially used exclusively to combat malaria, its use quickly spread to agriculture. In time, pest control, rather than disease control, came to dominate DDT use, and this large-scale agricultural use led to the evolution of resistant mosquitoes in many regions, much as the overuse of antibacterial soaps and antibiotics led to antibiotic resistance in bacteria. During the 1960s, awareness of the negative consequences of its indiscriminate use increased, ultimately leading to bans on agricultural applications of DDT in many countries in the 1970s.
The World Health Organization currently advises the use of 12 insecticides in IRS operations, including DDT as well as alternative insecticides (such as the pyrethroids permethrin and deltamethrin). This public health use of small amounts of DDT is permitted under the Stockholm Convention on Persistent Organic Pollutants (POPs), which prohibits the agricultural use of DDT. However, because of its legacy, many developed countries previously discouraged DDT use even in small quantities.
One problem with all forms of IRS is the evolution of resistance and avoidance. Mosquito populations targeted by IRS normally rest and live indoors, but because the spraying irritates them, their descendants tend to rest and live outdoors, where the insecticide affects them little if at all, greatly reducing the effectiveness of IRS as a defense mechanism.
Main article: Mosquito net
Mosquito nets create a protective barrier against malaria-carrying mosquitoes that bite at night.
Mosquito nets help keep mosquitoes away from people and significantly reduce malaria infection and transmission rates. The nets are not a perfect barrier, so they are often treated with an insecticide designed to kill the mosquito before it has time to find a way past the net. Insecticide-treated nets (ITNs) are estimated to be twice as effective as untreated nets and offer greater than 70% protection compared with no net. Although ITNs have proven very effective against malaria, only about 13% of households in sub-Saharan countries own them. Since Anopheles mosquitoes feed at night, the preferred method is to hang a large “bed net” above the center of a bed so that it drapes over it completely.
Community participation and health education strategies promoting awareness of malaria and the importance of control measures have been used successfully to reduce its incidence in some areas of the developing world. Recognizing the disease in its early stages can stop it from becoming fatal. Education can also teach people to cover over areas of stagnant, still water, such as water tanks, which are ideal breeding grounds for mosquitoes, thus cutting the risk of transmission between people. This approach is generally applied in urban areas, where large populations are concentrated in a confined space and transmission is most likely.
Other interventions for the control of malaria include mass drug administrations and intermittent preventive therapy.
Main article: Malaria prophylaxis
Several drugs, most of which are used for treatment of malaria, can be taken to prevent contracting the disease during travel to endemic areas. Chloroquine may be used where the parasite is still sensitive. However, due to resistance one of three medications—mefloquine (Lariam), doxycycline (available generically), or the combination of atovaquone and proguanil hydrochloride (Malarone)—is frequently needed. Doxycycline and the atovaquone and proguanil combination are the best tolerated; mefloquine is associated with higher rates of neurological and psychiatric symptoms.
The prophylactic effect does not begin immediately upon starting the drugs, so people temporarily visiting malaria-endemic areas usually begin taking the drugs one to two weeks before arriving and continue taking them for four weeks after leaving (with the exception of atovaquone/proguanil, which only needs to be started two days prior and continued for seven days afterwards). Generally, these drugs are taken daily or weekly, at a lower dose than is used for treating a person who has contracted the disease. Use of prophylactic drugs is seldom practical for full-time residents of malaria-endemic areas, and their use is usually restricted to short-term visitors and travelers to malarial regions. This is due to the cost of purchasing the drugs, adverse effects from long-term use, and the difficulty of obtaining some effective anti-malarial drugs outside of wealthy nations. The use of prophylactic drugs where malaria-bearing mosquitoes are present may encourage the development of partial immunity.
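The timing rules above can be sketched as a small date calculation. The lead and tail day counts are taken from the paragraph (generic regimens roughly two weeks before arrival to four weeks after departure; atovaquone/proguanil two days before to seven days after); the dictionary keys and function name are invented for illustration, and real dosing must of course come from a clinician.

```python
from datetime import date, timedelta

# (lead_days_before_arrival, tail_days_after_departure), per the text above
LEAD_TAIL_DAYS = {
    "generic": (14, 28),
    "atovaquone-proguanil": (2, 7),
}

def prophylaxis_window(drug, arrival, departure):
    """Return (first_dose_date, last_dose_date) for a trip, per the rule of thumb."""
    lead, tail = LEAD_TAIL_DAYS.get(drug, LEAD_TAIL_DAYS["generic"])
    return arrival - timedelta(days=lead), departure + timedelta(days=tail)

start, stop = prophylaxis_window("atovaquone-proguanil",
                                 date(2023, 6, 10), date(2023, 6, 20))
print(start, stop)  # 2023-06-08 2023-06-27
```

The asymmetry (long tail for most drugs) reflects the point made above: the drugs must keep suppressing parasites emerging from the liver for weeks after the last possible infective bite.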
Further information: Antimalarial medication
The treatment of malaria depends on the severity of the disease; whether a person can take oral drugs or must be admitted depends on the clinician’s assessment and experience.
Uncomplicated malaria may be treated with oral medications. The most effective strategy for P. falciparum infection is the use of artemisinins in combination with other antimalarials (known as artemisinin-combination therapy), which reduces the risk of resistance to artemisinin. These additional antimalarials include amodiaquine, lumefantrine, mefloquine or sulfadoxine/pyrimethamine. Another recommended combination is dihydroartemisinin and piperaquine. In the 2000s, malaria with partial resistance to artemisinins emerged in Southeast Asia.
Severe malaria requires parenteral administration of antimalarial drugs. Until the mid-2000s the most widely used treatment for severe malaria was quinine, but artesunate has been shown to be superior to quinine in both children and adults. Treatment of severe malaria also involves supportive measures, optimally performed in a critical care unit, including management of high fevers (hyperpyrexia) and the seizures that may result from them, and monitoring for respiratory depression, hypoglycemia, and hypokalemia. Infection with P. vivax, P. ovale or P. malariae is usually treated on an outpatient basis (while the person is at home). Treatment of P. vivax requires both treatment of the blood stages (with chloroquine or artemisinin-combination therapy) and clearance of the liver forms with primaquine.
Disability-adjusted life years for malaria per 100,000 inhabitants in 2004.
Severe malaria can progress extremely rapidly and cause death within hours or days. In the most severe cases of the disease, fatality rates can reach 20%, even with intensive care and treatment. Over the longer term, developmental impairments have been documented in children who have suffered episodes of severe malaria. Severe malaria causes widespread anemia during a period of rapid brain development, as well as direct brain damage; this neurologic damage results from cerebral malaria, to which children are more vulnerable. When properly treated, people with malaria can usually expect a complete recovery.
Map showing the distribution of malaria in the world: ♦ elevated occurrence of chloroquine- or multidrug-resistant malaria; ♦ occurrence of chloroquine-resistant malaria; ♦ no P. falciparum or chloroquine resistance; ♦ no malaria.
Based on documented cases, the WHO estimates that there were 216 million cases of malaria in 2010 resulting in 655,000 deaths. An estimate in The Lancet, based on a systematic analysis of all available mortality data combined with empirical methods for estimating causes of death, places the number of deaths in 2010 at 1.24 million. The majority of cases occur in children under five years old; pregnant women are also especially vulnerable. Despite efforts to reduce transmission and increase treatment, there has been little change in which areas are at risk of this disease since 1992. Indeed, if the prevalence of malaria stays on its present upwards course, the death rate could double in the next twenty years. Precise statistics are unknown because many cases occur in rural areas where people do not have access to hospitals or the means to afford health care. As a consequence, the majority of cases are undocumented.
Although coinfection with HIV and malaria does increase mortality, this is less of a problem than HIV/tuberculosis coinfection because the two diseases usually attack different age groups: malaria is most common in the young, active tuberculosis in the old. Although HIV/malaria coinfection produces less severe symptoms than the interaction between HIV and TB, HIV and malaria contribute to each other’s spread: malaria increases viral load, and HIV infection increases a person’s susceptibility to malaria infection.
Malaria is presently endemic in a broad band around the equator, in areas of the Americas, many parts of Asia, and much of Africa; however, it is in sub-Saharan Africa where 85–90% of malaria fatalities occur. The geographic distribution of malaria within large regions is complex, and malaria-afflicted and malaria-free areas are often found close to each other. Malaria is prevalent in tropical regions because of the significant amounts of rainfall, consistently high temperatures and high humidity, along with stagnant waters in which mosquito larvae readily mature, providing mosquitoes with the environment they need for continuous breeding. In drier areas, outbreaks of malaria have been predicted with reasonable accuracy by mapping rainfall. Malaria is more common in rural areas than in cities; this is in contrast to dengue fever, where urban areas present the greater risk. For example, several cities in Vietnam, Laos and Cambodia are essentially malaria-free, but the disease is present in many rural regions. By contrast, malaria in Africa is present in both rural and urban areas, though the risk is lower in the larger cities. The Wellcome Trust, UK, has funded the Malaria Atlas Project to map global endemic levels of malaria, providing a more contemporary and robust means with which to assess current and future malaria disease burden. This effort led to the publication of a map of P. falciparum endemicity in 2010. As of 2010, the countries with the highest death rates per 100,000 population were Côte d’Ivoire (86.15), Angola (56.93) and Burkina Faso (50.66), all in Africa.
Main article: History of malaria
Malaria has infected humans for over 50,000 years, and Plasmodium may have been a human pathogen for the entire history of the species. Close relatives of the human malaria parasites remain common in chimpanzees. Some new evidence suggests that the most virulent strain of human malaria may have originated in gorillas.
References to the unique periodic fevers of malaria are found throughout recorded history, beginning in 2700 BC in China. Malaria may have contributed to the decline of the Roman Empire, and was so pervasive in Rome that it was known as the “Roman fever”. Several regions in ancient Rome were considered at risk for the disease because of the conditions there favorable to malaria vectors, including southern Italy, the island of Sardinia, the Pontine Marshes, the lower regions of coastal Etruria and the city of Rome along the Tiber River. Mosquitoes bred preferentially in the stagnant water found in these places, and irrigated gardens, swamp-like grounds, runoff from agriculture, and drainage problems from road construction increased the amount of standing water.
The term malaria originates from Medieval Italian: mala aria — “bad air”; the disease was formerly called ague or marsh fever due to its association with swamps and marshland. Malaria was once common in most of Europe and North America, where it is no longer endemic, though imported cases do occur.
British doctor Ronald Ross received the Nobel Prize for Physiology or Medicine in 1902 for his work on malaria.
Malaria was the most important health hazard encountered by U.S. troops in the South Pacific during World War II, where about 500,000 men were infected. According to Joseph Patrick Byrne, “Sixty thousand American soldiers died of malaria during the African and South Pacific campaigns.” Scientific studies on malaria made their first significant advance in 1880, when Charles Louis Alphonse Laveran, a French army doctor working in the military hospital of Constantine in Algeria, observed parasites for the first time inside the red blood cells of people suffering from malaria. He proposed that malaria is caused by this organism, the first time a protist was identified as causing disease. For this and later discoveries, he was awarded the 1907 Nobel Prize for Physiology or Medicine. The malarial parasite was named Plasmodium by the Italian scientists Ettore Marchiafava and Angelo Celli. A year later, Carlos Finlay, a Cuban doctor treating people with yellow fever in Havana, provided strong evidence that mosquitoes were transmitting disease to and from humans. This work followed earlier suggestions by Josiah C. Nott, and work by Sir Patrick Manson, the “father of tropical medicine”, on the transmission of filariasis.
In April 1894, the Scottish physician Sir Ronald Ross visited Sir Patrick Manson at his house on Queen Anne Street, London. This visit was the start of four years of collaboration and fervent research that culminated in 1898 when Ross, then working in the Presidency General Hospital in Calcutta, proved the complete life cycle of the malaria parasite in mosquitoes. He thus demonstrated that the mosquito was the vector for malaria by showing that certain mosquito species transmit malaria to birds, and he isolated malaria parasites from the salivary glands of mosquitoes that had fed on infected birds. For this work, Ross received the 1902 Nobel Prize in Medicine. After resigning from the Indian Medical Service, Ross worked at the newly established Liverpool School of Tropical Medicine and directed malaria-control efforts in Egypt, Panama, Greece and Mauritius. The findings of Finlay and Ross were confirmed by a medical board headed by Walter Reed in 1900, and its recommendations were implemented by William C. Gorgas in the health measures undertaken during construction of the Panama Canal. This public-health work saved the lives of thousands of workers and helped develop the methods used in future public-health campaigns against the disease.
The first effective treatment for malaria came from the bark of the cinchona tree, which contains quinine. This tree grows on the slopes of the Andes, mainly in Peru. The indigenous peoples of Peru made a tincture of cinchona to control malaria. The Jesuits noted the efficacy of the practice and introduced the treatment to Europe during the 1640s, where it was rapidly accepted. It was not until 1820 that the active ingredient, quinine, was extracted from the bark, isolated and named by the French chemists Pierre Joseph Pelletier and Joseph Bienaimé Caventou. Quinine became the predominant malaria medication until the 1920s, when other medications began to be developed. In the 1940s, chloroquine replaced quinine as the treatment of both uncomplicated and severe falciparum malaria until resistance supervened, first in Southeast Asia and South America in the 1950s and then globally in the 1980s. Artemisinins, discovered by Chinese scientists in the 1970s, are now the recommended treatment for falciparum malaria, administered in combination with other antimalarials as well as in severe disease.
Society and culture
Malaria is not just a disease commonly associated with poverty but also a cause of poverty and a major hindrance to economic development. Tropical regions are affected most; however, malaria’s furthest extent reaches into some temperate zones with extreme seasonal changes. The disease has been associated with major negative economic effects on regions where it is widespread. During the late 19th and early 20th centuries, it was a major factor in the slow economic development of the American southern states. A comparison of average per capita GDP in 1995, adjusted for purchasing power parity, between countries with malaria and countries without malaria gives a fivefold difference ($1,526 USD versus $8,268 USD). In countries where malaria is common, average per capita GDP rose (between 1965 and 1990) only 0.4% per year, compared to 2.4% per year in other countries.
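The gap between those two annual growth rates compounds substantially over the 25-year period quoted. A quick check of the arithmetic:

```python
# Compound the two quoted annual per capita GDP growth rates over
# 1965-1990 (25 years).
years = 25
malaria_countries = 1.004 ** years  # countries where malaria is common
other_countries = 1.024 ** years    # other countries

# Roughly a 10% total rise versus roughly an 81% total rise.
print(round(malaria_countries, 3), round(other_countries, 3))
```

In other words, the seemingly small 2-percentage-point difference in annual growth translates into other countries' per capita GDP rising about eight times as much over the period.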
Poverty is both a cause and effect of malaria, since the poor do not have the financial capacities to prevent or treat the disease. In its entirety, the economic impact of malaria has been estimated to cost Africa $12 billion USD every year. The economic impact includes costs of health care, working days lost due to sickness, days lost in education, decreased productivity due to brain damage from cerebral malaria, and loss of investment and tourism. In some countries with a heavy malaria burden, the disease may account for as much as 40% of public health expenditure, 30–50% of admissions to hospital, and up to 50% of outpatient visits. The slow demographic transition in Africa may be partly attributed to malaria. Total fertility rates were best explained by child mortality, as measured indirectly by infant mortality, in a 2007 study.
A study of the effect of malaria on IQ in a sample of Mexicans found that exposure to malaria eradication during the birth year was associated with increases in IQ and in the probability of employment in a skilled occupation. The author suggests that this may be one explanation for the Flynn effect, and that it may be an important explanation for the link between national malaria burden and economic development. Cognitive abilities and school performance are impaired in sub-groups of people with malaria (whether cerebral or uncomplicated) when compared with healthy controls. Studies comparing cognitive function before and after treatment for acute malarial illness found significantly impaired school performance and cognitive abilities even after recovery. Malaria prophylaxis was shown in clinical trials to improve cognitive function and school performance compared with placebo groups. April 25 is World Malaria Day.
Counterfeit and substandard drugs
Sophisticated counterfeits have been found in several Asian countries, such as Cambodia, China, Indonesia, Laos, Thailand, and Vietnam, and are an important cause of avoidable death in those countries. The WHO has said that studies indicate up to 40% of artesunate-based malaria medications are counterfeit, especially in the Greater Mekong region, and it has established a rapid alert system to enable information about counterfeit drugs to be reported quickly to the relevant authorities in participating countries. There is no reliable way for doctors or lay people to detect counterfeit drugs without help from a laboratory. Companies are attempting to combat the persistence of counterfeit drugs by using new technology to provide security from source to distribution.
Another clinical and public health concern is the proliferation of substandard antimalarial medicines resulting from inappropriate concentrations of ingredients, contamination with other drugs or toxic impurities, poor-quality ingredients, poor stability, and inadequate packaging. A 2012 study found that roughly one-third of antimalarial medications in Southeast Asia and Sub-Saharan Africa failed chemical or packaging analysis, or were falsified.
Throughout history, the contraction of malaria (via natural outbreaks as well as via infliction of the disease as a biological warfare agent) has played a prominent role in the fortunes of government rulers, nation-states, military personnel, and military actions. “Malaria Site: History of Malaria During Wars” addresses the devastating impact of malaria in numerous well-known conflicts, beginning in June 323 B.C. That site’s authors note: “Many great warriors succumbed to malaria after returning from the warfront and advance of armies into continents was prevented by malaria. In many conflicts, more troops were killed by malaria than in combat.” The Centers for Disease Control (“CDC”) traces the history of malaria and its impacts farther back, to 2700 BCE.
In 1910, Nobel Prize in Medicine-winner Ronald Ross (himself a malaria survivor), published a book titled The Prevention of Malaria that included a chapter titled “The Prevention of Malaria in War.” The chapter’s author, Colonel C. H. Melville, Professor of Hygiene at Royal Army Medical College in London, addressed the prominent role that malaria has historically played during wars and advised: “A specially selected medical officer should be placed in charge of these operations with executive and disciplinary powers […].”
Significant financial investments have been made to procure existing anti-malarial agents and to create new ones. During World War I and World War II, supplies of the natural anti-malaria drugs, cinchona bark and quinine, proved inadequate for military personnel, and substantial funding was funneled into research and development of other drugs and vaccines. American military organizations conducting such research include the Navy Medical Research Center, the Walter Reed Army Institute of Research, and the U.S. Army Medical Research Institute of Infectious Diseases.
Additionally, initiatives have been founded such as Malaria Control in War Areas (MCWA), established in 1942, and its successor, the Communicable Disease Center (now known as the Centers for Disease Control) established in 1946. According to the CDC, MCWA “was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic” and, during these activities, to “train state and local health department officials in malaria control techniques and strategies.” The CDC’s Malaria Division continued that mission, successfully reducing malaria in the United States, after which the organization expanded its focus to include “prevention, surveillance, and technical support both domestically and internationally.”
Several notable attempts are being made to eliminate the parasite from sections of the world, or to eradicate it worldwide. In 2006, the organization Malaria No More set a public goal of eliminating malaria from Africa by 2015, and the organization plans to dissolve if that goal is accomplished. Several malaria vaccines are in clinical trials, intended to provide protection for children in endemic areas and to reduce the speed of transmission of the disease. As of 2012, The Global Fund to Fight AIDS, Tuberculosis and Malaria has distributed 230 million insecticide-treated nets intended to stop mosquito-borne transmission of malaria. According to director Inder Singh, the U.S.-based Clinton Foundation has significantly reduced the cost of drugs to treat malaria, and is working to further reduce the spread of the disease. Other efforts, such as the Malaria Atlas Project, focus on analyzing the climate and weather information required to accurately predict the spread of malaria based on the availability of habitat for malaria-carrying mosquitoes.
Malaria has been successfully eradicated in certain areas. The Republic of Mauritius, a tropical island in the western Indian Ocean, considered ecological connections to malaria transmission when constructing its current plan for malaria control. To prevent mosquitoes from breeding in aquatic areas, DDT is used in moderate amounts. Additionally, larvae-eating fish are placed in water sources to remove the malaria vectors before they become a threat to the human population. Obstructions are also removed from these sources to maintain water flow and reduce stagnant water. Similarly, marsh or swamp-like environments are drained and filled to diminish mosquito breeding grounds. These actions have produced positive results. The program has cut infection and death rates tremendously and is cost-effective, requiring only about US$1 per person per year. This success is a clear indication that responses to adverse environmental conditions can decrease rates of disease.
With the onset of drug-resistant Plasmodium parasites, new strategies are required to combat the widespread disease. One such approach is the introduction of synthetic pyridoxal-amino acid adducts, which are channeled into the parasite and trapped there upon phosphorylation by plasmodial PdxK (pyridoxine/pyridoxal kinase). One such novel compound, PT3, a cyclic pyridoxyl-tryptophan methyl ester, effectively hinders the proliferation of Plasmodium parasites without harming human cells.
Malaria parasites contain apicoplasts, organelles related to the plastids found in plants, complete with their own functioning genomes. These apicoplasts are thought to have originated through the endosymbiosis of algae, and they play a crucial role in various aspects of parasite metabolism, for example in fatty acid biosynthesis. As of 2003, 466 proteins had been found to be produced by apicoplasts, and these are now being investigated as possible targets for novel anti-malarial drugs.
Main article: Malaria vaccine
Malaria vaccines have been an elusive goal of research. The first promising studies demonstrating the potential for a malaria vaccine were performed in 1967 by immunizing mice with live, radiation-attenuated sporozoites, which provided significant protection to the mice upon subsequent injection with normal, viable sporozoites. Since the 1970s, there has been a considerable effort to develop similar vaccination strategies in humans. It was determined that an individual can be protected from a P. falciparum infection if they receive over 1,000 bites from infected but irradiated mosquitoes.
Immunity (or, more accurately, tolerance) does occur naturally, but only in response to repeated infection with multiple strains of malaria. A completely effective vaccine is not yet available for malaria, although several vaccines are under development. SPf66 was tested extensively in endemic areas in the 1990s, but clinical trials showed it to be insufficiently effective. Other vaccine candidates, targeting the blood-stage of the parasite’s life cycle, have also been insufficient on their own. Several potential vaccines targeting the pre-erythrocytic stage are being developed, with RTS,S showing the most promising results so far.
DDT (dichlorodiphenyltrichloroethane) is an organochlorine insecticide: a white, crystalline solid that is tasteless and almost odorless. Technical DDT has been formulated in almost every conceivable form, including solutions in xylene or petroleum distillates, emulsifiable concentrates, water-wettable powders, granules, aerosols, smoke candles, and charges for vaporisers and lotions.
First synthesized in 1874, DDT’s insecticidal properties were not discovered until 1939, and it was used with great success in the second half of World War II to control malaria and typhus among civilians and troops. The Swiss chemist Paul Hermann Müller was awarded the Nobel Prize in Physiology or Medicine in 1948 “for his discovery of the high efficiency of DDT as a contact poison against several arthropods.” After the war, DDT was made available for use as an agricultural insecticide, and soon its production and use skyrocketed.
In 1962, Silent Spring by American biologist Rachel Carson was published. The book catalogued the environmental impacts of the indiscriminate spraying of DDT in the US and questioned the logic of releasing large amounts of chemicals into the environment without fully understanding their effects on ecology or human health. The book suggested that DDT and other pesticides may cause cancer and that their agricultural use was a threat to wildlife, particularly birds. Its publication was one of the signature events in the birth of the environmental movement, and resulted in a large public outcry that eventually led to DDT being banned in the US in 1972. DDT was subsequently banned for agricultural use worldwide under the Stockholm Convention, but its limited use in disease vector control continues to this day and remains controversial.
Along with the passage of the Endangered Species Act, the US ban on DDT is cited by scientists as a major factor in the comeback of the bald eagle, the national bird of the United States, from near-extinction in the contiguous US.
Properties and chemistry
DDT is similar in structure to the insecticide methoxychlor and the acaricide dicofol. It is highly hydrophobic and nearly insoluble in water, but has good solubility in most organic solvents, fats, and oils. DDT does not occur naturally; it is produced by the reaction of chloral (CCl3CHO) with chlorobenzene (C6H5Cl) in the presence of sulfuric acid, which acts as a catalyst. Trade names under which DDT has been marketed include Anofex (Geigy Chemical Corp.), Cezarex, Chlorophenothane, Clofenotane, Dicophane, Dinocide, Gesarol (Syngenta Crop.), Guesapon, Guesarol, Gyron (Ciba-Geigy Corp. – now Novartis), Ixodex, Neocid (Reckitt & Colman, Ltd), Neocidol (Ciba-Geigy Corp. – now Novartis), and Zerdane.
Isomers and related compounds
o,p’-DDT, a minor component of commercial DDT.
Commercial DDT is a mixture of several closely related compounds. The major component (77%) is the p,p’ isomer, which is pictured at the top of this article. The o,p’ isomer (pictured to the right) is also present in significant amounts (15%). Dichlorodiphenyldichloroethylene (DDE) and dichlorodiphenyldichloroethane (DDD) make up the balance. DDE and DDD are also the major metabolites and breakdown products in the environment. The term “total DDT” is often used to refer to the sum of all DDT-related compounds (p,p’-DDT, o,p’-DDT, DDE, and DDD) in a sample.
Production and use statistics
From 1950 to 1980, DDT was extensively used in agriculture—more than 40,000 tonnes were used each year worldwide—and it has been estimated that a total of 1.8 million tonnes have been produced globally since the 1940s. In the U.S., where it was manufactured by Ciba, Montrose Chemical Company, Pennwalt and Velsicol Chemical Corporation, production peaked in 1963 at 82,000 tonnes per year. More than 600,000 tonnes (1.35 billion lbs) were applied in the U.S. before the 1972 ban. Usage peaked in 1959 at about 36,000 tonnes.
In 2009, 3314 tonnes were produced for the control of malaria and visceral leishmaniasis. India is the only country still manufacturing DDT, with China having ceased production in 2007. India is the largest consumer.
Mechanism of insecticide action
In insects, DDT opens sodium ion channels in neurons, causing them to fire spontaneously, which leads to spasms and eventual death. Insects with certain mutations in their sodium channel gene are resistant to DDT and similar insecticides. In some insect species, DDT resistance is also conferred by up-regulation of genes expressing cytochrome P450.
In humans, however, it may affect health through genotoxicity or endocrine disruption. See Effects on human health.
Commercial product containing 5% DDT
Commercial product (powder box, 50 g) containing 10% DDT: Néocide, Ciba-Geigy DDT. The label reads: “Destroys parasites such as fleas, lice, ants, bedbugs, cockroaches, flies, etc. Sprinkle Néocide on the hiding places of vermin, wherever insects are found, and along their paths. Leave the powder in place as long as possible.” “Destroys the parasites of man and his dwelling.” “Death is not instantaneous, but follows inevitably sooner or later.” “French manufacturing”; “harmless to humans and warm-blooded animals”; “sure and lasting effect. Odorless.”
DDT was first synthesized in 1874 by Othmar Zeidler, but its insecticidal properties were not discovered until 1939, by the Swiss scientist Paul Hermann Müller, who was awarded the 1948 Nobel Prize in Physiology or Medicine for this work.
Use in the 1940s and 1950s
DDT is the best-known of several chlorine-containing pesticides used in the 1940s and 1950s. With pyrethrum in short supply, DDT was used extensively during World War II by the Allies to control the insect vectors of typhus — nearly eliminating the disease in many parts of Europe. In the South Pacific, it was sprayed aerially for malaria and dengue fever control with spectacular effects. While DDT’s chemical and insecticidal properties were important factors in these victories, advances in application equipment coupled with a high degree of organization and sufficient manpower were also crucial to the success of these programs. In 1945, it was made available to farmers as an agricultural insecticide, and it played a minor role in the final elimination of malaria in Europe and North America. By the time DDT was introduced in the U.S., the disease had already been brought under control by a variety of other means. One CDC physician involved in the United States’ DDT spraying campaign said of the effort that “we kicked a dying dog.”
In 1955, the World Health Organization commenced a program to eradicate malaria worldwide, relying largely on DDT. The program was initially highly successful, eliminating the disease in “Taiwan, much of the Caribbean, the Balkans, parts of northern Africa, the northern region of Australia, and a large swath of the South Pacific” and dramatically reducing mortality in Sri Lanka and India. However, widespread agricultural use led to resistant insect populations. In many areas, early victories were partially or completely reversed, and in some cases rates of transmission even increased. The program was successful in eliminating malaria only in areas with “high socio-economic status, well-organized healthcare systems, and relatively less intensive or seasonal malaria transmission”.
DDT was less effective in tropical regions due to the continuous life cycle of mosquitoes and poor infrastructure. It was not applied at all in sub-Saharan Africa due to these perceived difficulties. Mortality rates in that area never declined to the same dramatic extent, and sub-Saharan Africa now accounts for the bulk of malarial deaths worldwide, especially following the disease’s resurgence as a result of resistance to drug treatments and the spread of the deadly malarial variant caused by Plasmodium falciparum. The goal of eradication was abandoned in 1969, and attention was focused on controlling and treating the disease. Spraying programs (especially using DDT) were curtailed due to concerns over safety and environmental effects, as well as problems in administrative, managerial and financial implementation, but mostly because mosquitoes were developing resistance to DDT. Efforts shifted from spraying to the use of bednets impregnated with insecticides and other interventions.
Silent Spring and the U.S. ban
As early as the 1940s, scientists in the U.S. had begun expressing concern over possible hazards associated with DDT, and in the 1950s the government began tightening some of the regulations governing its use. However, these early events received little attention, and it was not until 1957, when the New York Times reported an unsuccessful struggle to restrict DDT use in Nassau County, New York, that the issue came to the attention of the popular naturalist-author, Rachel Carson. William Shawn, editor of The New Yorker, urged her to write a piece on the subject, which developed into her famous book Silent Spring, published in 1962. The book argued that pesticides, including DDT, were poisoning both wildlife and the environment and were also endangering human health.
Silent Spring was a best seller, and public reaction to it launched the modern environmental movement in the United States. The year after it appeared, President Kennedy ordered his Science Advisory Committee to investigate Carson’s claims. The report the committee issued “add[ed] up to a fairly thorough-going vindication of Rachel Carson’s Silent Spring thesis,” in the words of the journal Science, and recommended a phaseout of “persistent toxic pesticides”. DDT became a prime target of the growing anti-chemical and anti-pesticide movements, and in 1967 a group of scientists and lawyers founded the Environmental Defense Fund (EDF) with the specific goal of winning a ban on DDT. Victor Yannacone, Charles Wurster, Art Cooley and others associated with inception of EDF had all witnessed bird kills or declines in bird populations and suspected that DDT was the cause. In their campaign against the chemical, EDF petitioned the government for a ban and filed a series of lawsuits. Around this time, toxicologist David Peakall was measuring DDE levels in the eggs of peregrine falcons and California condors and finding that increased levels corresponded with thinner shells.
In response to an EDF suit, the U.S. Court of Appeals in 1971 ordered the EPA to begin the de-registration procedure for DDT. After an initial six-month review process, William Ruckelshaus, the Agency’s first Administrator, rejected an immediate suspension of DDT’s registration, citing studies from the EPA’s internal staff stating that DDT was not an imminent danger to human health and wildlife. However, the findings of these staff members were criticized, as they were produced mostly by economic entomologists inherited from the United States Department of Agriculture, whom many environmentalists felt were biased toward agribusiness and tended to minimize concerns about human health and wildlife. The decision not to ban thus created public controversy.
The EPA then held seven months of hearings in 1971–1972, with scientists giving evidence both for and against the use of DDT. In the summer of 1972, Ruckelshaus announced the cancellation of most uses of DDT—an exemption allowed for public health uses under some conditions. Immediately after the cancellation was announced, both EDF and the DDT manufacturers filed suit against the EPA, with the industry seeking to overturn the ban, and EDF seeking a comprehensive ban. The cases were consolidated, and in 1973 the U.S. Court of Appeals for the District of Columbia ruled that the EPA had acted properly in banning DDT.
The U.S. DDT ban took place amidst a growing public mistrust of industry, with the Surgeon General issuing a report on smoking in 1964, the Cuyahoga River catching fire in 1969, the fiasco surrounding the use of diethylstilbestrol (DES), and the well-publicized decline in the bald eagle population.
Some uses of DDT continued under the public health exemption. For example, in June 1979, the California Department of Health Services was permitted to use DDT to suppress flea vectors of bubonic plague. DDT also continued to be produced in the US for foreign markets until as late as 1985, when over 300 tons were exported.
Restrictions on usage
In the 1970s and 1980s, agricultural use was banned in most developed countries, beginning with Hungary in 1968, followed by Norway and Sweden in 1970 and Germany and the United States in 1972, but not by the United Kingdom until 1984. Vector control use has not been banned, but it has been largely replaced by less persistent alternative insecticides.
The Stockholm Convention, which took effect in 2004, outlawed several persistent organic pollutants, and restricted DDT use to vector control. The Convention has been ratified by more than 170 countries and is endorsed by most environmental groups. Recognizing that total elimination in many malaria-prone countries is currently unfeasible because there are few affordable or effective alternatives, public health use is exempt from the ban pending acceptable alternatives. Malaria Foundation International states, “The outcome of the treaty is arguably better than the status quo going into the negotiations…For the first time, there is now an insecticide which is restricted to vector control only, meaning that the selection of resistant mosquitoes will be slower than before.”
Despite the worldwide ban, agricultural use continues in India, North Korea, and possibly elsewhere.
Today, about 3,000–4,000 tonnes of DDT are produced each year for vector control. DDT is applied to the inside walls of homes to kill or repel mosquitoes. This intervention, called indoor residual spraying (IRS), greatly reduces environmental damage. It also reduces the incidence of DDT resistance. For comparison, treating 40 hectares (99 acres) of cotton during a typical U.S. growing season requires the same amount of chemical as roughly 1,700 homes.
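The cotton-versus-homes comparison above can be sanity-checked with back-of-the-envelope arithmetic. The application rates below are illustrative assumptions, not figures from the text, chosen only to show the scale of the difference between seasonal agricultural spraying and IRS:

```python
# Rough check of the "40 ha of cotton ~ 1,700 homes" comparison.
# All rates here are assumed, illustrative values.

COTTON_KG_PER_HA = 11.0   # assumed seasonal DDT use per hectare of cotton
FIELD_HA = 40             # field size given in the text
IRS_G_PER_M2 = 2.0        # assumed IRS dose per square metre of wall
WALL_M2_PER_HOME = 130    # assumed sprayable interior surface per home

field_total_g = COTTON_KG_PER_HA * FIELD_HA * 1000   # grams for the field
per_home_g = IRS_G_PER_M2 * WALL_M2_PER_HOME         # grams for one home
homes = field_total_g / per_home_g
print(round(homes))  # ≈ 1692, consistent with "roughly 1,700 homes"
```

Under these assumptions a single cotton field consumes as much DDT in one season as roughly 1,700 house treatments, which is why IRS uses only a tiny fraction of historical agricultural volumes.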
Degradation of DDT to form DDE (by elimination of HCl, left) and DDD (by reductive dechlorination, right)
DDT is a persistent organic pollutant that is readily adsorbed to soils and sediments, which can act both as sinks and as long-term sources of exposure for terrestrial organisms. Depending on conditions, its soil half-life can range from 22 days to 30 years. Routes of loss and degradation include runoff, volatilization, photolysis and aerobic and anaerobic biodegradation. Due to its hydrophobic properties, in aquatic ecosystems DDT and its metabolites are absorbed by aquatic organisms and adsorbed on suspended particles, leaving little DDT dissolved in the water itself. Its breakdown products and metabolites, DDE and DDD, are also highly persistent and have similar chemical and physical properties. DDT and its breakdown products are transported from warmer regions of the world to the Arctic by the phenomenon of global distillation, where they then accumulate in the region’s food web.
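Half-life figures like those above are easiest to interpret through simple exponential decay, where the fraction remaining after time t is (1/2) raised to t divided by the half-life. A minimal sketch (the 10-year horizon is an arbitrary illustration):

```python
# Exponential decay: N(t) = N0 * (1/2) ** (t / t_half).

def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of an initial amount left after t_years."""
    return 0.5 ** (t_years / half_life_years)

# After 10 years, the two ends of the reported soil half-life range
# (22 days vs. 30 years) give wildly different residues:
fast = fraction_remaining(10, 22 / 365)   # effectively zero
slow = fraction_remaining(10, 30)         # about 79% still present
print(f"{fast:.2e}, {slow:.0%}")
```

The spread of the range matters: at the slow end, most of a DDT application is still in the soil a decade later, which is what makes long-range transport and bioaccumulation possible.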
Because of its lipophilic properties, DDT has a high potential to bioaccumulate, especially in predatory birds. DDT, DDE, and DDD magnify through the food chain, with apex predators such as raptor birds concentrating more chemicals than other animals in the same environment. They are very lipophilic and are stored mainly in body fat. DDT and DDE are very resistant to metabolism; in humans, their half-lives are 6 and up to 10 years, respectively. In the United States, these chemicals were detected in almost all human blood samples tested by the Centers for Disease Control in 2005, though their levels have sharply declined since most uses were banned in the US. Estimated dietary intake has also declined, although FDA food tests commonly detect it.
Applying marine macroalgae (seaweed) to contaminated soil has been reported to reduce DDT toxicity by up to 80% within six weeks.
Effects on wildlife and eggshell thinning
DDT is toxic to a wide range of living organisms, including marine animals such as crayfish, daphnids, sea shrimp and many species of fish. It is less toxic to mammals, but may be moderately toxic to some amphibian species, especially in the larval stage. DDT, through its metabolite DDE, caused eggshell thinning and resulted in severe population declines in multiple North American and European birds of prey. Eggshell thinning lowers the reproductive rate of certain bird species by causing egg breakage and embryo deaths. DDE-related eggshell thinning is considered a major reason for the decline of the bald eagle, brown pelican, peregrine falcon, and osprey. However, different groups of birds vary greatly in their sensitivity to these chemicals: birds of prey, waterfowl, and songbirds are more susceptible to eggshell thinning than chickens and related species, and DDE appears to be more potent than DDT. Even in 2010, more than forty years after the U.S. ban, California condors that feed on sea lions at Big Sur, which in turn feed in the Palos Verdes Shelf area of the Montrose Chemical Superfund site, seemed to have continuing thin-shell problems. Scientists with the Ventana Wildlife Society and others are intensifying studies and remediation of the condors’ problems.
The biological thinning mechanism is not entirely known, but there is strong evidence that p,p’-DDE inhibits calcium ATPase in the membrane of the shell gland and reduces the transport of calcium carbonate from blood into the eggshell gland. This results in a dose-dependent reduction in shell thickness. There is also evidence that o,p’-DDT disrupts female reproductive tract development, later impairing eggshell quality. Multiple mechanisms may be at work, or different mechanisms may operate in different species. Some studies show that although DDE levels have fallen dramatically, eggshells remain 10–12 percent thinner than before DDT was first used.
Effects on human health
Potential mechanisms of action in humans are genotoxicity and endocrine disruption. DDT may be directly genotoxic, but it may also induce enzymes that produce other genotoxic intermediates and DNA adducts. It is an endocrine disruptor; the DDT metabolite DDE acts as an antiandrogen (but not as an estrogen). p,p’-DDT, DDT’s main component, has little or no androgenic or estrogenic activity. The minor component o,p’-DDT has weak estrogenic activity.
DDT is classified as “moderately toxic” by the United States National Toxicology Program (NTP) and “moderately hazardous” by the World Health Organization (WHO), based on the rat oral LD50 of 113 mg/kg. DDT has on rare occasions been administered orally as a treatment for barbiturate poisoning.
DDT and DDE have been linked to diabetes. A number of studies from the US, Canada, and Sweden have found that the prevalence of the disease in a population increases with serum DDT or DDE levels.
DDT and DDE, like other organochlorines, have been shown to have xenoestrogenic activity, meaning they are chemically similar enough to estrogens to trigger hormonal responses in animals. This endocrine disrupting activity has been observed in mice and rat toxicological studies, and available epidemiological evidence indicates that these effects may be occurring in humans as a result of DDT exposure. The US Environmental Protection Agency states that DDT exposure damages the reproductive system and reduces reproductive success. These effects may cause developmental and reproductive toxicity:
- A review article in The Lancet states, “research has shown that exposure to DDT at amounts that would be needed in malaria control might cause preterm birth and early weaning … toxicological evidence shows endocrine-disrupting properties; human data also indicate possible disruption in semen quality, menstruation, gestational length, and duration of lactation.”
- Human epidemiological studies suggest that exposure is a risk factor for premature birth and low birth weight, and may harm a mother’s ability to breast feed. Some 21st-century researchers argue that these effects may increase infant deaths, offsetting any anti-malarial benefits. A 2008 study, however, failed to confirm the association between exposure and difficulty breastfeeding.
- Several recent studies demonstrate a link between in utero exposure to DDT or DDE and developmental neurotoxicity in humans. For example, a 2006 University of California, Berkeley study suggests that children exposed while in the womb have a greater chance of development problems, and other studies have found that even low levels of DDT or DDE in umbilical cord serum at birth are associated with decreased attention at infancy and decreased cognitive skills at 4 years of age. Similarly, Mexican researchers have linked first trimester DDE exposure to retarded psychomotor development.
- Other studies document decreases in semen quality among men with high exposures (generally from IRS).
- Studies generally find that high blood DDT or DDE levels do not increase time to pregnancy (TTP). There is some evidence that the daughters of highly exposed women may have more difficulty getting pregnant (i.e. increased TTP).
- DDT is associated with early pregnancy loss, a type of miscarriage. A prospective cohort study of Chinese textile workers found “a positive, monotonic, exposure-response association between preconception serum total DDT and the risk of subsequent early pregnancy losses.” The median serum DDE level of study group was lower than that typically observed in women living in homes sprayed with DDT.
- A Japanese study of congenital hypothyroidism concluded that in utero DDT exposure may affect thyroid hormone levels and “play an important role in the incidence and/or causation of cretinism.” Other studies have also found that DDT or DDE interferes with proper thyroid function.
Occupational exposure in agriculture and malaria control has been linked to neurological problems (e.g. Parkinson’s disease) and asthma.
DDT is suspected to cause cancer. The NTP classifies it as “reasonably anticipated to be a carcinogen,” the International Agency for Research on Cancer classifies it as a “possible” human carcinogen, and the EPA classifies DDT, DDE, and DDD as class B2 “probable” carcinogens. These evaluations are based mainly on the results of animal studies.
There is evidence from epidemiological studies (i.e. studies in human populations) that indicates that DDT causes cancers of the liver, pancreas and breast. There is mixed evidence that it contributes to leukemia, lymphoma and testicular cancer. Other epidemiological studies suggest that DDT/DDE does not cause multiple myeloma, or cancers of the prostate, endometrium, rectum, lung, bladder, or stomach.
The question of whether DDT or DDE are risk factors for breast cancer has been repeatedly studied. While individual studies conflict, the most recent reviews of all the evidence conclude that exposure before puberty increases the risk of subsequent breast cancer. Until recently, almost all studies measured DDT or DDE blood levels at the time of breast cancer diagnosis or after. This study design has been criticized, since the levels at diagnosis do not necessarily correspond to levels when the cancer started. Taken as a whole, studies of this design, which have been extensively reviewed, “do not support the hypothesis that exposure to DDT is an important risk factor for breast cancer.”
In contrast, a study published in 2007 strongly associated early exposure (the p,p’- isomer) and breast cancer later in life. Unlike previous studies, this prospective cohort study collected blood samples from young mothers in the 1960s while DDT was still in use, and their breast cancer status was then monitored over the years. In addition to suggesting that the p,p’- isomer is the more significant risk factor, the study also suggests that the timing of exposure is critical. For the subset of women born more than 14 years before agricultural use, there was no association between DDT and breast cancer. However, for younger women—exposed earlier in life—the third who were exposed most to p,p’-DDT had a fivefold increase in breast cancer incidence over the least exposed third, after correcting for the protective effect of o,p’-DDT. These results are supported by animal studies.
Use against malaria
Malaria remains a major public health challenge in many countries. WHO estimates for 2008 were 243 million cases and 863,000 deaths. About 89% of these deaths occurred in Africa, mostly in children under the age of 5. DDT is one of many tools that public health officials use to fight the disease. Its use in this context has been called everything from a “miracle weapon [that is] like Kryptonite to the mosquitoes” to “toxic colonialism.”
Before DDT, eliminating mosquito breeding grounds by drainage or poisoning with Paris green or pyrethrum was sometimes successful in fighting malaria. In parts of the world with rising living standards, the elimination of malaria was often a collateral benefit of the introduction of window screens and improved sanitation. Today, a variety of usually simultaneous interventions is the norm. These include antimalarial drugs to prevent or treat infection; improvements in public health infrastructure to quickly diagnose, sequester, and treat infected individuals; bednets and other methods intended to keep mosquitoes from biting humans; and vector control strategies such as larviciding with insecticides, ecological controls such as draining mosquito breeding grounds or introducing fish to eat larvae, and indoor residual spraying with insecticides, possibly including DDT. IRS involves the treatment of all interior walls and ceilings with insecticides, and is particularly effective against mosquitoes, since many species rest on an indoor wall before or after feeding. DDT is one of 12 WHO-approved IRS insecticides. How much of a role DDT should play in this mix of strategies is still controversial.
WHO’s anti-malaria campaign of the 1950s and 1960s relied heavily on DDT and the results were promising, though temporary. Experts tie the resurgence of malaria to multiple factors, including poor leadership, management and funding of malaria control programs; poverty; civil unrest; and increased irrigation. The evolution of resistance to first-generation drugs (e.g. chloroquine) and to insecticides exacerbated the situation. Resistance was largely fueled by often unrestricted agricultural use. Resistance and the harm both to humans and the environment led many governments to restrict or curtail the use of DDT in vector control as well as agriculture.
Once the mainstay of anti-malaria campaigns, DDT was, as of 2008, used in only 12 countries, including India and some southern African states, though the number was expected to rise.
Effectiveness of DDT against malaria
When it was first introduced in World War II, DDT was very effective in reducing malaria morbidity and mortality. The WHO’s anti-malaria campaign, which consisted mostly of spraying DDT, was initially very successful as well. For example, in Sri Lanka, the program reduced cases from about 3 million per year before spraying to just 18 in 1963 and 29 in 1964. Thereafter the program was halted to save money, and malaria rebounded to 600,000 cases in 1968 and the first quarter of 1969. The country resumed DDT vector control, but the mosquitoes had acquired resistance in the interim, presumably because of continued agricultural use. The program switched to malathion, which, though more expensive, proved effective.
Today, DDT remains on the WHO’s list of insecticides recommended for IRS. Since the appointment of Arata Kochi as head of its anti-malaria division, WHO’s policy has shifted from recommending IRS only in areas of seasonal or episodic transmission of malaria, to also advocating it in areas of continuous, intense transmission. The WHO has reaffirmed its commitment to eventually phasing out DDT, aiming “to achieve a 30% cut in the application of DDT world-wide by 2014 and its total phase-out by the early 2020s if not sooner” while simultaneously combating malaria. The WHO plans to implement alternatives to DDT to achieve this goal.
South Africa is one country that continues to use DDT under WHO guidelines. In 1996, the country switched to alternative insecticides and malaria incidence increased dramatically. Returning to DDT and introducing new drugs brought malaria back under control. According to DDT advocate Donald Roberts, malaria cases increased in South America after countries on that continent stopped using DDT. Research data show a strong negative relationship between DDT residual house sprayings and malaria rates: in research covering 1993 to 1995, Ecuador increased its use of DDT and achieved a 61% reduction in malaria rates, while each of the other countries that gradually decreased DDT use experienced large increases in malaria rates.
Resistance has greatly reduced DDT’s effectiveness. WHO guidelines require that absence of resistance must be confirmed before using the chemical. Resistance is largely due to agricultural use, in much greater quantities than required for disease prevention. According to one study that attempted to quantify the lives saved by banning agricultural use and thereby slowing the spread of resistance, “it can be estimated that at current rates each kilo of insecticide added to the environment will generate 105 new cases of malaria.”
Resistance was noted early in spray campaigns. Paul Russell, a former head of the Allied Anti-Malaria campaign, observed in 1956 that “resistance has appeared after six or seven years.” DDT has lost much of its effectiveness in Sri Lanka, Pakistan, Turkey and Central America, and it has largely been replaced by organophosphate or carbamate insecticides, e.g. malathion or bendiocarb.
In many parts of India, DDT has also largely lost its effectiveness. Agricultural uses were banned in 1989, and its anti-malarial use has been declining. Urban use has halted completely. Nevertheless, DDT is still manufactured and used, and one study concluded that “DDT is still a viable insecticide in indoor residual spraying owing to its effectivity in well supervised spray operation and high excito-repellency factor.”
Studies of malaria-vector mosquitoes in KwaZulu-Natal Province, South Africa found susceptibility to 4% DDT (the WHO susceptibility standard) in 63% of the samples, compared to an average of 86.5% in the same species caught in the open. The authors concluded that “Finding DDT resistance in the vector An. arabiensis, close to the area where we previously reported pyrethroid-resistance in the vector An. funestus Giles, indicates an urgent need to develop a strategy of insecticide resistance management for the malaria control programmes of southern Africa.”
DDT can still be effective against resistant mosquitoes, and the avoidance of DDT-sprayed walls by mosquitoes is an additional benefit of the chemical. For example, a 2007 study reported that resistant mosquitoes avoided treated huts. The researchers argued that DDT was the best pesticide for use in IRS (even though it did not afford the most protection from mosquitoes out of the three test chemicals) because the other pesticides worked primarily by killing or irritating mosquitoes—encouraging the development of resistance to these agents. Others argue that the avoidance behavior slows the eradication of the disease. Unlike other insecticides such as pyrethroids, DDT requires long exposure to accumulate a lethal dose; however, its irritant property shortens contact periods. “For these reasons, when comparisons have been made, better malaria control has generally been achieved with pyrethroids than with DDT.” In India, with its outdoor sleeping habits and frequent night duties, “the excito-repellent effect of DDT, often reported useful in other countries, actually promotes outdoor transmission.”
Main article: Indoor residual spraying#Residents’ opposition to IRS
For IRS to be effective, at least 80% of homes and barns in an area must be sprayed. Lower coverage rates can jeopardize program effectiveness. Many residents resist DDT spraying, objecting to the lingering smell and stains on walls, and because spraying may exacerbate problems with other insect pests. Pyrethroid insecticides (e.g. deltamethrin and lambda-cyhalothrin) can overcome some of these issues, increasing participation.
People living in areas where DDT is used for IRS have high levels of the chemical and its breakdown products in their bodies. Compared to contemporaries living where DDT is not used, South Africans living in sprayed homes have levels that are several orders of magnitude greater. DDT levels in breast milk in regions where it is used against malaria greatly exceed the allowable standards for breast-feeding infants. These levels are associated with neurological abnormalities in babies.
Most studies of DDT’s human health effects have been conducted in developed countries where DDT is not used and exposure is relatively low. Many experts urge that alternatives be used instead of IRS. Epidemiologist Brenda Eskenazi argues, “We know DDT can save lives by repelling and killing disease-spreading mosquitoes. But evidence suggests that people living in areas where DDT is used are exposed to very high levels of the pesticide. The only published studies on health effects conducted in these populations have shown profound effects on male fertility. Clearly, more research is needed on the health of populations where indoor residual spraying is occurring, but in the meantime, DDT should really be the last resort against malaria rather than the first line of defense.”
Illegal diversion to agriculture is also a concern, as it is almost impossible to prevent, and its subsequent use on crops is uncontrolled. For example, DDT use is widespread in Indian agriculture, particularly mango production, and is reportedly used by librarians to protect books. Other examples include Ethiopia, where DDT intended for malaria control is reportedly being used in coffee production, and Ghana, where it is used for fishing. Residues in crops at levels unacceptable for export have been an important factor in recent bans in several tropical countries. Adding to this problem is a lack of skilled personnel and supervision.
Criticism of restrictions on DDT use
Critics claim that restricting DDT in vector control has caused unnecessary deaths due to malaria. Estimates range from hundreds of thousands to millions. Robert Gwadz of the National Institutes of Health said in 2007, “The ban on DDT may have killed 20 million children.” These arguments have been dismissed as “outrageous” by former WHO scientist Socrates Litsios. May Berenbaum, University of Illinois entomologist, says, “to blame environmentalists who oppose DDT for more deaths than Hitler is worse than irresponsible.” Investigative journalist Adam Sarvana and others characterize this notion as a “myth” promoted principally by Roger Bate of the pro-DDT advocacy group Africa Fighting Malaria (AFM).
Criticisms of a DDT “ban” often specifically reference the 1972 US ban (with the erroneous implication that this constituted a worldwide ban and prohibited use of DDT in vector control). Reference is often made to Rachel Carson’s Silent Spring even though she never pushed for a ban on DDT. John Quiggin and Tim Lambert wrote, “the most striking feature of the claim against Carson is the ease with which it can be refuted.” Carson actually devoted a page of her book to considering the relationship between DDT and malaria, warning of the evolution of DDT resistance in mosquitoes and concluding:
It is more sensible in some cases to take a small amount of damage in preference to having none for a time but paying for it in the long run by losing the very means of fighting [is the advice given in Holland by Dr Briejer in his capacity as director of the Plant Protection Service]. Practical advice should be “Spray as little as you possibly can” rather than “Spray to the limit of your capacity.”
It has also been alleged that donor governments and agencies have refused to fund DDT spraying, or made aid contingent upon not using DDT. According to a report in the British Medical Journal, use of DDT in Mozambique “was stopped several decades ago, because 80% of the country’s health budget came from donor funds, and donors refused to allow the use of DDT.” Roger Bate asserts, “many countries have been coming under pressure from international health and environment agencies to give up DDT or face losing aid grants: Belize and Bolivia are on record admitting they gave in to pressure on this issue from [USAID].”
The United States Agency for International Development (USAID) has been the focus of much criticism. While the agency is currently funding the use of DDT in some African countries, in the past it did not. When John Stossel accused USAID of not funding DDT because it wasn’t “politically correct,” Anne Peterson, the agency’s assistant administrator for global health, replied that “I believe that the strategies we are using are as effective as spraying with DDT … So, politically correct or not, I am very confident that what we are doing is the right strategy.” USAID’s Kent R. Hill states that the agency has been misrepresented: “USAID strongly supports spraying as a preventative measure for malaria and will support the use of DDT when it is scientifically sound and warranted.” The Agency’s website states that “USAID has never had a ‘policy’ as such either ‘for’ or ‘against’ DDT for IRS. The real change in the past two years [2006/07] has been a new interest and emphasis on the use of IRS in general—with DDT or any other insecticide—as an effective malaria prevention strategy in tropical Africa.” The website further explains that in many cases alternative malaria control measures were judged to be more cost-effective than DDT spraying, and so were funded instead.
Main article: Indoor residual spraying
Advocates of increased use of DDT in IRS claim that alternative insecticides are more expensive, more toxic, or not as effective. As discussed above, susceptibility of mosquitoes to DDT varies geographically. The same is true for alternative insecticides, so the relative effectiveness of DDT varies as well. Data for comparing toxicity and cost-effectiveness are lacking. Relative insecticide costs vary by location and ease of access, the habits of the local mosquitoes, the degrees of resistance exhibited by the mosquitoes, and the habits and compliance of the population, among other factors. The choice of insecticide has little impact on the total cost of a round of spraying, since product costs are only a fraction of campaign costs. IRS coverage needs to be maintained throughout the malaria season, making DDT’s relatively long life an important cost savings.
Organophosphate and carbamate insecticides, e.g. malathion and bendiocarb, respectively, are more expensive than DDT per kilogram and are applied at roughly the same dosage. Pyrethroids such as deltamethrin are also more expensive than DDT, but are applied more sparingly (0.02–0.3 g/m² vs 1–2 g/m²), so the net cost per house is about the same over 6 months.
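The dosage figures above imply per-house quantities that differ by one to two orders of magnitude, which is how a pricier pyrethroid can end up costing about the same per house. A minimal sketch of that arithmetic, where the wall area and the per-kilogram prices are illustrative assumptions (not figures from the source); only the dose rates come from the text:

```python
def cost_per_round(dose_g_per_m2, usd_per_kg, wall_area_m2=150):
    """Insecticide cost (USD) for one spray round of one house.

    wall_area_m2 and usd_per_kg are illustrative assumptions;
    only the dose rates are taken from the text above.
    """
    grams = dose_g_per_m2 * wall_area_m2
    return grams / 1000 * usd_per_kg

# DDT: 1-2 g/m2 per the text; price per kg is an assumption.
ddt = cost_per_round(2.0, 5.0)
# Deltamethrin: 0.02-0.3 g/m2 per the text; price per kg is an assumption.
delta = cost_per_round(0.02, 100.0)
print(f"DDT ~${ddt:.2f}/house, deltamethrin ~${delta:.2f}/house")
```

With these assumed prices the per-round costs land within a small factor of each other, illustrating how the sparing pyrethroid dose can offset a higher unit price; actual totals also depend on respray frequency and local logistics.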
Non-chemical vector control
Before DDT, malaria was successfully eradicated or curtailed in several tropical areas by removing or poisoning mosquito breeding grounds and larval habitats, for example by filling or applying oil to standing water. These methods have seen little application in Africa for more than half a century.
The relative effectiveness of IRS (with DDT or alternative insecticides) versus other malaria control techniques (e.g. bednets or prompt access to anti-malarial drugs) varies greatly and is highly dependent on local conditions.
A WHO study released in January 2008 found that mass distribution of insecticide-treated mosquito nets and artemisinin-based drugs cut malaria deaths in half in Rwanda and Ethiopia, countries with high malaria burdens. IRS with DDT did not play an important role in mortality reduction in these countries.
Vietnam has enjoyed declining malaria cases and a 97% mortality reduction after switching in 1991 from a poorly funded DDT-based campaign to a program based on prompt treatment, bednets, and pyrethroid-group insecticides.
In Mexico, effective and affordable chemical and non-chemical strategies against malaria have been so successful that the Mexican DDT manufacturing plant ceased production due to lack of demand.
While the increase in malaria cases after DDT use collapsed is cited as evidence of its value, many other factors contributed to the rise in cases.
A review of fourteen studies on the subject in sub-Saharan Africa, covering insecticide-treated nets, residual spraying, chemoprophylaxis for children, chemoprophylaxis or intermittent treatment for pregnant women, a hypothetical vaccine, and changing front-line drug treatment, found that decision making was limited by the gross lack of information on the costs and effects of many interventions, the very small number of cost-effectiveness analyses available, the lack of evidence on the costs and effects of packages of measures, and the problems of generalizing or comparing studies that relate to specific settings and use different methodologies and outcome measures. The two cost-effectiveness estimates of DDT residual spraying that were examined did not provide an accurate estimate of the cost-effectiveness of DDT spraying; furthermore, the resulting estimates may not be good predictors of cost-effectiveness in current programs.
However, a study in Thailand found the cost per malaria case prevented of DDT spraying ($1.87 US) to be 21% greater than the cost per case prevented of lambda-cyhalothrin–treated nets ($1.54 US), at the very least casting some doubt on the unexamined assumption that DDT was the most cost-effective measure to use in all cases. The director of Mexico’s malaria control program reports similar results, declaring that it is 25% cheaper for Mexico to spray a house with synthetic pyrethroids than with DDT. However, another study in South Africa found generally lower costs for DDT spraying than for impregnated nets.
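The 21% figure follows directly from the two per-case costs quoted ($1.87 for DDT versus $1.54 for treated nets). A one-line sanity check of that arithmetic:

```python
# Verify the quoted cost difference per malaria case prevented
# (Thailand study: DDT spraying vs lambda-cyhalothrin-treated nets).
ddt_cost, net_cost = 1.87, 1.54  # USD per case prevented, from the text
excess = (ddt_cost - net_cost) / net_cost
print(f"DDT costs {excess:.0%} more per case prevented")  # → 21%
```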
A more comprehensive approach to measuring the cost-effectiveness or efficacy of malaria control would measure not only the dollar cost of the project and the number of people saved, but would also consider the ecological damage and the negative effects of insecticide use on human health. One preliminary study of DDT found that the detriment to human health likely approaches or exceeds the beneficial reductions in malaria cases, except perhaps in malaria epidemic situations. It is similar to the earlier-mentioned study estimating theoretical infant mortality caused by DDT, and is subject to the criticism also mentioned earlier.
A study in the Solomon Islands found that “although impregnated bed nets cannot entirely replace DDT spraying without substantial increase in incidence, their use permits reduced DDT spraying.”
A comparison of four successful programs against malaria in Brazil, India, Eritrea, and Vietnam does not endorse any single strategy but instead states, “Common success factors included conducive country conditions, a targeted technical approach using a package of effective tools, data-driven decision-making, active leadership at all levels of government, involvement of communities, decentralized implementation and control of finances, skilled technical and managerial capacity at national and sub-national levels, hands-on technical and programmatic support from partner agencies, and sufficient and flexible financing.”
DDT-resistant mosquitoes have generally proved susceptible to pyrethroids. Thus far, pyrethroid resistance in Anopheles has not been a major problem.