In the American vocabulary, “welfare” has often had a limited meaning, most commonly associated in public discourse with public assistance to mothers with dependent children. Yet government welfare can also be given a broader definition, as a general social safety net designed to support citizens in need.
Under this definition, “welfare” refers to government protections for workers’ incomes, which are often threatened by structural economic change under the free market system. In an economy in which workers rely on wages to support themselves, threats to income arise due to unemployment, sickness, old age, and loss of the family breadwinner.
In the United States, then, government welfare has been a collection of different programs that includes unemployment insurance, health insurance, old age pensions, accident insurance, and support for families with dependent children. In the twentieth century, many nations in Western Europe built what became known as the “welfare state,” a comprehensive system designed to protect citizens from the hazards of an industrial, capitalist economy.
Compared with the European welfare state, the American welfare system developed late, is less extensive, was haphazardly constructed, and relies upon dispersed authority. While European nations instituted programs for old-age pensions and accident insurance near the turn of the twentieth century, the United States did not develop significant welfare programs until the 1930s under Franklin D. Roosevelt’s New Deal. Unlike the European welfare state, the American welfare system has never included universal health insurance or guaranteed family incomes. Significant groups of Americans in need have not been covered by government welfare programs. Moreover, the American old-age pension system is based on worker contributions, and thus does little to redistribute wealth.
While the European welfare state was consolidated in the coherent programs of social-democratic or labor parties, the American welfare system has lacked a comprehensive structure. It was initially built as a response to emergency, during the economic crisis of the Great Depression. The American welfare system is characterized by dispersed authority. Unlike the nationalized European systems, responsibility for welfare has been shared by federal, state, and local governments, which has often led to wide disparities in welfare eligibility and benefits in different regions of the country.
Throughout its history, the American distribution of government welfare has been closely connected to cultural attitudes toward the poor. Americans have commonly distinguished between the deserving poor, who become needy through no fault of their own and are entitled to public assistance, and the undeserving poor, who are responsible for their own plight and who could escape poverty by developing a strong work ethic. Separating the deserving poor from the undeserving has often proved difficult. Nevertheless, for much of American history, many needy people have been seen as undeserving of public assistance. Because of a deeply held cultural belief in the “American dream,” which holds that anyone can achieve economic advancement through hard work, Americans have characteristically attributed poverty to the moral failings of individuals.
In the American welfare system, the distinction between the deserving and the undeserving poor has translated into a division between social insurance and public assistance programs. Social insurance, which includes old age pensions and unemployment insurance, has been available on a universal basis to those who earn it through work. Public assistance, such as aid to dependent children and general assistance for the very needy, is targeted at the poor and requires financial and moral evaluations for applicants to prove their worthiness for aid.
The benefits of public assistance are typically less generous than those of social insurance. Recipients of public assistance have often been seen as undeserving of aid because they are not seen as having earned it through work. Public assistance has thus carried a social stigma. There is also a gender and racial dimension to the devaluation of public assistance in comparison to social insurance, as recipients of the former are disproportionately female and minority.
Welfare from the Colonial Period to the Progressive Era
Treatment of the poor in colonial America was based on the principles set forth in the Elizabethan poor law of 1601. According to this English law, each town or parish was responsible for the care of its own needy. The law distinguished between three categories of the poor: those who were unable to work due to sickness or age, who were to be given material aid; the able-bodied who were unable to find jobs, who were to be provided with work; and the able-bodied but unwilling to work, who were to be instilled with the work ethic. The two important legacies of this law were its stipulation that poor relief is a local responsibility and the burden that it placed on the needy to prove their worthiness for relief.
Operating on the principles of the Elizabethan poor law, American colonial governments took responsibility for providing for the needy in their localities, through so-called “outdoor relief”—material assistance granted on a case-by-case basis. Localities also auctioned off destitute persons to the lowest bidder, who would receive funds in exchange for caring for them. However, because they were seen as drains on government funds, strangers in need were often warned away from towns, even if they were sick or disabled.
Beginning in the late eighteenth century, however, increasing urbanization, immigration, population growth, and unemployment led to a rising poor population and the need for a more systematic approach to welfare. Although outdoor relief continued to be practiced, states and municipalities supported “indoor relief” by building institutions to provide for the permanently poor and to instill the able-bodied with habits of work discipline.
In general, poorhouses were inadequately funded, poorly administered, and frequently run by corrupt officials. They lumped together different classes of the poor in the same institution: the old, the sick, and the mentally ill were housed with the able-bodied unemployed. Under such circumstances, poorhouses could neither provide adequate care for the needy nor instill work habits in the able-bodied. In part, poorhouses were meant to be unpleasant institutions, as the threat of having to live in the poorhouse was intended to deter the poor from idleness. By the beginning of the twentieth century, most poorhouses had been transformed into homes for the aged who had no one else to care for them.
By the end of the nineteenth century, many European nations were beginning to build a welfare state. A number of American reformers, believing that government welfare would have to be altered to reflect the new hazards of an industrial economy, sought to emulate the European example. While these reformers failed in their efforts to develop European-style provisions for old-age pensions and unemployment insurance, the Progressive Era (1900–1921) did see the early growth of the American welfare system. For example, from 1911 to 1921, forty-two states introduced workmen’s compensation legislation, which provided accident insurance to protect workers against job-related injuries.
In the Progressive Era, a powerful network of progressive middle-class women lobbied for mothers’ pensions, and thirty-nine states developed mothers’ aid programs from 1911 to 1921. Under these programs, states gave money to single mothers to help them defray the costs of raising their children in their own homes. The aid was meant to deter the use of child labor to help raise money for the family and to prevent the institutionalization of poor and fatherless children in orphanages, a common practice in the nineteenth century. However, in order to receive this aid, women had to prove that they were fit mothers with suitable homes. Often, the benefits given were inadequate, and the programs only reached a small portion of those in need—in 1931, only 93,620 of 1.5 million female-headed families received mothers’ aid.
Progressives had the most success in instituting programs whose goal was protecting children. In 1912, the federal government established the U.S. Children’s Bureau to gather information on the treatment of the nation’s children. In 1921, Congress passed the Sheppard-Towner Act, giving matching funds to states to build maternal and child health facilities to fight infant mortality. Despite their accomplishments, Progressives failed to develop an extensive American welfare system—that task was not accomplished until the New Deal.
The New Deal and the Establishment of the American Welfare System
The severity of the Great Depression created new demands for government relief. After the stock market crash of 24 October 1929, millions of Americans lost their jobs and found themselves without adequate means of financial support. Between 1929 and the summer of 1932, the unemployment rate skyrocketed from 3.2 percent to 24.9 percent. In the face of this economic crisis, President Herbert Hoover stressed that relief for the needy should be the responsibility of private, local, and state relief agencies. Yet the need for assistance was staggering and could not be met by the institutions Americans had traditionally relied upon to provide public aid. In 1932, Congress established the Reconstruction Finance Corporation, which was authorized to lend $300 million in relief funds directly to the states. However, the true expansion of the American welfare system came during the presidency of Franklin Roosevelt, who took office in 1933. For the first time, the federal government committed itself to providing economic security for its citizens. By the end of the 1930s, the United States had become a world leader in social spending.
The first measures that Roosevelt took were temporary ones to relieve the immediate problems caused by the depression, though in doing so he became the first president to assert that the federal government should be responsible for the welfare of its citizens. In 1933, he appointed a dynamic administrator, Harry Hopkins, to lead government relief efforts and established the Federal Emergency Relief Administration (FERA). FERA provided funds to the states for the needy, both in the form of direct cash grants and on a matching basis. For the most part, the funds were distributed by the states with federal supervision. Work projects to provide jobs to the unemployed were administered by FERA, as well as the Civil Works Administration (CWA) and the Civilian Conservation Corps (CCC)—both created in 1933. By February of 1934, FERA, the CWA, and the CCC combined reached 28 million people, 20 percent of the American population.
The economic crisis provided an opportunity for liberals to pass European-style social welfare legislation that they had unsuccessfully advocated for years. In 1935, Congress passed Roosevelt’s Social Security Act. This bill was designed to establish a more permanent system for government welfare. Roosevelt hoped that an expansive program of government security would protect Americans “against the hazards and vicissitudes of life.”
In the short term, the law provided old-age assistance in the form of immediate payments for the destitute elderly. For the long term, however, the legislation established Old Age Insurance (OAI), a pension fund for American workers aged sixty-five and over. Social security, as OAI came to be called, was a fully federal program that granted standard benefits throughout the country. While there was a popular movement in favor of noncontributory old-age pensions paid for directly out of general government funds, OAI worked on a contributory basis, with workers and employers paying equal shares into the system. While workers had to contribute in order to receive social security, benefits did not correspond to the contributions that workers made in social security taxes. The New Dealers decided to make social security a contributory program in order to appease the demands of employers and because they believed that if it were a separate program with its own tax funds, it would be protected from political attack in the future.
The Social Security Act established unemployment insurance, also on a contributory basis, by providing for a cooperative federal-state program to provide payments for a set number of weeks to workers who had lost their jobs. The act also established a system of federal matching funds for the states for needy children, ADC (Aid to Dependent Children). Since each of these programs was administered by the states, payment amounts and eligibility requirements varied widely throughout the nation.
Eventually synonymous with the word “welfare,” ADC was relatively uncontroversial at the time it was established. However, it was less generous than OAI or unemployment insurance and did less to preserve its recipients’ dignity. At first, ADC extended benefits only to children, not to their caregivers—when this was changed later, the program became AFDC (Aid to Families with Dependent Children). While social security was universally available to eligible workers, ADC recipients were means-tested. Since the aid was not distributed on a universal basis, ADC recipients were often stigmatized. In order for a family to receive assistance, state officials had to certify its need and worthiness of aid. Mothers had to prove that they provided a fit home for their children and that they adhered to an acceptable code of sexual conduct in order to be eligible for ADC. Until 1961, fathers of children aided under ADC had to be completely absent in order for the mothers to receive aid. The procedures that state agencies adopted to determine need often involved substantial invasions of privacy. Social workers intensely scrutinized mothers’ budgets, and some agencies conducted “midnight raids” on women receiving aid to check for overnight male visitors—if one was found, assistance was withdrawn.
The welfare legislation of the New Deal was based on a distinction between “unemployables” and “employables.” Unemployables such as the elderly, the disabled, and dependent children and their caregivers were to receive public aid without entering the labor market. Employables, however, were to be provided with jobs. In keeping with long-held American beliefs, the architects of the New Deal believed that it was morally damaging to substitute dependence on public aid for work. Therefore, the New Deal contained massive public works programs designed to provide work relief to the unemployed.
In 1935, Congress created the Works Progress Administration (WPA). Under Harry Hopkins, the WPA administered public works projects throughout the nation and employed workers of all skill levels at prevailing local wages. From 1935 to its elimination in 1943, the WPA employed between 1.5 and 3 million Americans at any one time, making it the largest civilian employer in the nation. During that period, it constructed or repaired 600,000 miles of road, built or rebuilt 116,000 bridges, and repaired 110,000 buildings. The CCC and the Public Works Administration (PWA) also provided jobs for public works during this period.
New Deal public works programs, however, faced the difficult problem of reconciling the need to create jobs with the need to perform useful work efficiently. Moreover, they were hampered by inadequate funding from Congress and could not rely on a fully developed federal bureaucracy to administer them. The WPA was unable to provide jobs for all of those who needed them, and its wages were often insufficient. The WPA’s provision that it could employ only one member of each family reflected the prevailing gender expectation that men were to be the family breadwinners. Less than 20 percent of WPA workers were female.
While many New Dealers planned to make public employment a long-term federal commitment that could expand and contract with economic need, the public works programs were eliminated in 1943, as economic growth returned and the Roosevelt administration focused its attention on the war. In addition, New Dealers failed in their attempts to establish a system of national health insurance. Thus, while the New Deal did create a national welfare system, its programs were less ambitious than what many of its planners had anticipated.
In part, the inability of the New Dealers to develop a more extensive welfare system was due to resistance among conservative Democratic congressmen from the segregated South. Many in the South who would have benefited from such programs were unable to vote. Not only were virtually all African Americans disenfranchised, many poor whites were effectively prevented from voting by high poll taxes. Southern congressmen were instrumental in attaching limits to the programs that did pass, ensuring that federal welfare would not provide an economic alternative to work for the southern black labor force. For instance, southern congressmen saw to it that OAI excluded agricultural and domestic workers—60 percent of the nation’s African Americans worked in one of these two categories.
Despite the broader ambitions of New Dealers themselves, the legacy of the New Deal was the two-tiered system established by the Social Security Act: a social insurance program that included old-age pensions and unemployment insurance, with benefits for workers of all social classes; and a public assistance program, ADC, targeted at the poor, that was less generous in its benefits and attached a humiliating stigma to its recipients. While the New Deal failed to establish a complete welfare state, the expansion of the American welfare system in this period was nevertheless dramatic. The amount of money the federal government spent on public aid increased from $208 million in 1932 to $4.9 billion in 1939.
From the War on Poverty to Welfare Reform
In the 1940s and 1950s, federal and state governments continued to assume the major financial and program role in providing welfare. The welfare system did not undergo significant expansion, however, until the 1960s. In 1964, Lyndon B. Johnson, acting on the plans of his predecessor, John F. Kennedy, launched the “War on Poverty.” This public campaign had the ambitious goal of defeating poverty in the United States. However, its planners believed that economic growth would solve much of the problem, and so they avoided implementing expensive and controversial measures to fight poverty such as direct income maintenance and New Deal–style public works programs. Instead, the War on Poverty focused its energies on job training and education, launching programs such as Head Start, the Job Corps, and Upward Bound.
While the programs of the War on Poverty failed to match the extravagant rhetoric of the campaign, the American welfare system did expand. In 1965, Congress established the Medicare and Medicaid programs to provide medical assistance for the aged and for welfare recipients, respectively. Through these programs, a quarter of Americans received some form of government-sponsored medical insurance. Food stamps became more widely available and free to the poor: while, in 1965, the food stamp program provided only $36 million in aid to 633,000 people, by 1975 it granted $4.6 billion in aid to 17.1 million recipients. President Richard Nixon was unable to get Congress to pass the Family Assistance Plan in 1972, which would have provided a guaranteed minimum income to all families. However, Congress did pass Supplemental Security Income (SSI), which established an income floor on benefits paid to the aged, blind, and disabled.
Existing programs such as social security and Aid to Families with Dependent Children experienced tremendous growth during this period. Social security payments increased in amount and reached more people, as a greater percentage of the population became elderly and lived longer. The expansion of the welfare system substantially reduced poverty during this period, particularly among the elderly. From 1959 to 1980, the percentage of the elderly below the poverty line dropped from 35 percent to 16 percent.
In 1960, the AFDC program cost less than $1 billion and reached 745,000 families. By 1971, it cost $6 billion and reached over 3 million families. The expansion of AFDC was due in part to the concentration of poverty among certain demographic groups, such as African Americans and women. Due to the mechanization of southern agriculture, many African Americans moved northward into urban areas where the unemployment rate was high because of a decrease in factory jobs. The “feminization of poverty” left many women in economic need due to an increasing divorce rate, increasing out-of-wedlock births, and increasing rates of child desertion by fathers.
The expansion of AFDC was also due to a growing “welfare rights” consciousness that encouraged those eligible to receive aid and sought to remove the social stigma associated with it. This consciousness was promoted by groups such as the National Welfare Rights Organization (NWRO) and the Office of Economic Opportunity (OEO), a War on Poverty agency charged with seeking the “maximum feasible participation of the poor” in its programs. From 1968 to 1971, the Supreme Court decided a number of cases that expanded welfare rights. It struck down state residency requirements for AFDC eligibility, eliminated the rule that the father had to be entirely absent for aid to be given, and granted legal due process to those requesting welfare.
Although social security remained a much larger program than AFDC, AFDC became more controversial. Beginning in the mid-1970s, the expansion of the AFDC program fueled fears of a growing “welfare crisis.” As inner cities suffered the effects of de-industrialization and high unemployment, poverty increasingly came to be associated with African Americans living in urban centers, who were often referred to in public discourse as an “underclass” living in a debilitating “culture of poverty.” The public image of the AFDC recipient increasingly became that of the “welfare mom”—presumed to be an unwed African American. Here, the stigma of being poor and the stigma of single motherhood were combined to create a potent racial stereotype.
A new conservative critique of welfare gained increasing prominence by the 1980s. For leading conservatives such as Charles Murray and George Gilder, liberal social policy was itself responsible for keeping people in poverty. According to this critique, welfare programs kept recipients dependent on the state for support. Conservatives advocated reducing or abolishing AFDC payments, in order to provide poor people with the necessary incentive to become self-sufficient through work.
The conservative critique of the welfare system gained strength from an increasing distrust of the federal government. Changing gender expectations also help explain the new call for AFDC recipients to earn their living through work. The demand that the needy advance through work was a familiar one, but it had generally been applied only to men. Whereas in the New Deal single mothers were considered unemployable and kept out of the labor market, by the end of the century women were assumed to be a natural part of the labor force.
President Ronald Reagan acted on the growing conservative critique by slashing government welfare programs during the 1980s. Between 1982 and 1985 total funds spent on unemployment insurance went down 6.9 percent, food stamps went down 12.6 percent, child nutrition programs were cut 27.7 percent, housing assistance 4.4 percent, and low-income energy assistance 8.3 percent. While the Reagan administration decreased the money it spent on public assistance to the poor, it increased the budget of social security. Thus, while conservatives had success in reducing public assistance programs, existing social insurance programs that reached the middle class continued to enjoy substantial political support.
In 1992, Bill Clinton was elected president with a campaign pledge to “end welfare as we know it.” However, he spent much of his energy in his first years in office in an unsuccessful attempt to extend the welfare system by providing all Americans with health insurance. After the 1994 election, a group of conservative Republicans took control of Congress and advocated the passage of welfare reform legislation. They were led by House Speaker Newt Gingrich, who pledged in his “Contract with America” to “replace the welfare state with the opportunity society.”
In 1996, Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act, designed to reduce the number of people receiving public assistance. This act repealed AFDC and replaced it with Temporary Assistance for Needy Families (TANF). Whereas AFDC had entailed an open-ended federal commitment to provide matching funds to the states, TANF stipulated a set amount of money, earmarked for parents with dependent children, to be given to the states by the federal government, shifting much of the responsibility for care of the needy back to the states. The act encouraged states to use a significant proportion of their funds not for cash payments but for job training, job placement, and education. The law stipulated that no family has a right to government assistance: states have no obligation to provide relief to needy families. States were given a number of incentives to cut their welfare caseloads. Under the new legislation, TANF caregivers were eligible for only five years of benefits over the course of their lives.
Those cut from the welfare rolls were expected to get a job in the private sector and support themselves with wages. However, states were under no obligation to address the obstacles to work that many welfare recipients faced, such as low skills, lack of transportation, and the need for child care, though many states did choose to implement programs addressing these obstacles. The jobs typically available to former AFDC recipients were low-wage service industry jobs that still left them below the poverty line. In 1997, median wages for workers who had left welfare were reported to be 20 percent of hourly wages for all workers.
The legislation succeeded in reducing the number of people receiving aid for dependent children from 4.4 million at the time the law passed to 2.4 million in December 1999, though some of these reductions should be ascribed to the booming economy of the late 1990s. However, it was unclear how the system would work in more difficult economic times—for even if the need for assistance escalated, the federal government would not increase the amount of funds it granted to the states.
Bibliography
Amenta, Edwin. Bold Relief: Institutional Politics and the Origins of Modern American Social Policy. Princeton, N.J.: Princeton University Press, 1998.
American Social History Project. Who Built America?: Working People and the Nation’s Economy, Politics, Culture, and Society. 2d ed. 2 vols. New York: Worth, 2000.
Gordon, Linda, ed. Women, the State, and Welfare. Madison: University of Wisconsin Press, 1990.
Gordon, Linda. Pitied but Not Entitled: Single Mothers and the History of Welfare, 1890–1935. New York: Free Press, 1994.
Katz, Michael B. The Undeserving Poor: From the War on Poverty to the War on Welfare. New York: Pantheon, 1989.
———. In the Shadow of the Poorhouse: A Social History of Welfare in America. Rev. ed. New York: Basic Books, 1996.
Levine, Daniel. Poverty and Society: The Growth of the American Welfare State in International Comparison. New Brunswick, N.J.: Rutgers University Press, 1988.
Patterson, James T. America’s Struggle against Poverty in the Twentieth Century. Cambridge, Mass.: Harvard University Press, 2000.
Piven, Frances Fox, and Richard A. Cloward. Regulating the Poor: The Functions of Public Welfare. Updated ed. New York: Vintage, 1993.
Rodgers, Daniel T. Atlantic Crossings: Social Politics in a Progressive Age. Cambridge, Mass.: Harvard University Press, 1998.
Skocpol, Theda. Protecting Soldiers and Mothers: The Political Origins of Social Policy in the United States. Cambridge, Mass.: Harvard University Press, 1992.
Trattner, William I. From Poor Law to Welfare State: A History of Social Welfare in America. New York: Free Press, 1999.