A critique of what lies behind the use of data in contemporary education policy
While the science fiction tales of artificial intelligence eclipsing humanity are still very much fantasies, in Algorithms of Education the authors tell real stories of how algorithms and machines are transforming education governance, providing a fascinating discussion and critique of data and its role in education policy.
Algorithms of Education explores how, for policy makers, today’s ever-growing amount of data creates the illusion of greater control over the educational futures of students and the work of school leaders and teachers. In fact, the increased datafication of education, the authors argue, offers less and less control, as algorithms and artificial intelligence further abstract the educational experience and distance policy makers from teaching and learning. Focusing on the changing conditions for education policy and governance, Algorithms of Education proposes that schools and governments are increasingly turning to “synthetic governance”—a form of governance in which the line between human and machine becomes less clear—as a strategy for optimizing education.
Exploring case studies of data infrastructures, facial recognition, and the growing use of data science in education, Algorithms of Education draws on a wide variety of fields—from critical theory and media studies to science and technology studies and education policy studies—mapping the political and methodological directions for engaging with datafication and artificial intelligence in education governance. According to the authors, we must go beyond the debates that separate humans and machines in order to develop new strategies for, and a new politics of, education.
In this volume, specialists from traditionally separate areas in economics and finance investigate issues at the intersection of their fields. They argue that financial decisions of the firm can affect real economic activity—and this is true for enough firms and consumers to have significant aggregate economic effects. They demonstrate that important differences—asymmetries—in access to information between "borrowers" and "lenders" ("insiders" and "outsiders") in financial transactions affect investment decisions of firms and the organization of financial markets. The original research emphasizes the role of information problems in explaining empirically important links between internal finance and investment, as well as their role in accounting for observed variations in mechanisms for corporate control.
When many individuals aggregate and no special organization is imposed, casual social groups form among monkeys in treetops and among human beings on sidewalks, beaches, and playgrounds. Joel Cohen shows that previously existing probabilistic models do not describe the details of available data on the sizes of such casual groups. He proposes a new family of models, called linear one-step transition (LOST) models, which predict observed equilibrium group size distributions and also describe the dynamics of systems of social groups.
For the first time, he presents recorded observations of the dynamics of group formation and dissolution among human children in free play. These observations are consistent with the dynamics assumed by the LOST models. Such models suggest generalizations that may apply to epidemiology, the sociology of rumors, and traffic control. Within biology, this approach offers ways of linking the behavior of individuals with the population ecology of a species.
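To convey the flavor of the one-step structure (an illustrative sketch in standard birth–death notation, not the book's own specification), a group's size can be modeled as changing by one member at a time, with transition rates linear in the current size:

\[
\lambda_n = a + b\,n, \qquad \mu_n = c\,n,
\]

where \(\lambda_n\) is the rate at which a group of size \(n\) gains a member and \(\mu_n\) the rate at which it loses one. Detailed balance, \(\pi_n \lambda_n = \pi_{n+1}\,\mu_{n+1}\), then pins down an equilibrium distribution of group sizes, which is the kind of prediction against which such models can be tested.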
This book explores the strikingly similar ways in which information is encoded in nonverbal man-made signals (e.g., traffic lights and tornado sirens) and animal-evolved signals (e.g., color patterns and vocalizations). The book also considers some coding principles for reducing certain unwanted redundancies and explains how desirable redundancies enhance communication reliability.
Jack Hailman believes this work pioneers several aspects of analyzing human and animal communication. The book is the first to survey man-made signals as a class. It is also the first to compare such human-devised systems with signaling in animals by showing the highly similar ways in which the two encode information. A third innovation is generalizing principles of quantitative information theory to apply to a broad range of signaling systems. Finally, it is the first to distinguish among types of redundancy and to separate them into unwanted and desirable categories.
This remarkably novel book will be of interest to a wide readership. Appealing not only to specialists in semiotics, animal behavior, psychology, and allied fields but also to general readers, it serves as an introduction to animal signaling and to an important class of human communication.
Investment banks play a critically important role in channeling capital from investors to corporations. Not only do they float and distribute new corporate securities, they also assist companies in the private placement of securities, arrange mergers and acquisitions, devise specialized financing, and provide other corporate financial services.
After sketching the history and evolution of investment banking, the authors describe the structure of the industry, focusing on the competitive forces at work within it today. They explore patterns of concentration and analyze the strategic and economic factors that underlie those patterns. The authors directly examine the pairing up of investment banks with their corporate clients. They show that the market is sharply segmented, with banks and corporate clients being matched in roughly rank order, the most prestigious banks with the largest, most powerful clients, and so on. Vigorous competition occurs within each segment, but much less between them.
With the industry now confronting a changing regulatory environment, a growing tendency of clients to arrange their own financing, and increasing competition both from within and from commercial banks and foreign institutions, Competition in the Investment Banking Industry is essential reading for anyone interested in the future of investment banking.
Albert Einstein’s theory of general relativity describes the effect of gravitation on the shape of space and the flow of time. But for more than four decades after its publication, the theory remained largely a curiosity for scientists; however accurate it seemed, Einstein’s mathematical code—represented by six interlocking equations—was one of the most difficult to crack in all of science. That is, until a twenty-nine-year-old Cambridge graduate solved the great riddle in 1963. Roy Kerr’s solution emerged coincidentally with the discovery of black holes that same year and provided fertile testing ground—at long last—for general relativity. Today, scientists routinely cite the Kerr solution, but even among specialists, few know the story of how Kerr cracked Einstein’s code.
Fulvio Melia here offers an eyewitness account of the events leading up to Kerr’s great discovery. Cracking the Einstein Code vividly describes how luminaries such as Karl Schwarzschild, David Hilbert, and Emmy Noether set the stage for the Kerr solution; how Kerr came to make his breakthrough; and how scientists such as Roger Penrose, Kip Thorne, and Stephen Hawking used the accomplishment to refine and expand modern astronomy and physics. Today more than 300 million supermassive black holes are suspected of anchoring their host galaxies across the cosmos, and the Kerr solution is what astronomers and astrophysicists use to describe much of their behavior.
By unmasking the history behind the search for a real-world solution to Einstein’s field equations, Melia offers a firsthand account of an important but untold story. Sometimes dramatic, often exhilarating, but always attuned to the human element, Cracking the Einstein Code is ultimately a showcase of how important science gets done.
The tasks of macroeconomics are to interpret observations on economic aggregates in terms of the motivations and constraints of economic agents and to predict the consequences of alternative hypothetical ways of administering government economic policy. General equilibrium models form a convenient context for analyzing such alternative government policies. In the past ten years, the strengths of general equilibrium models and the corresponding deficiencies of Keynesian and monetarist models of the 1960s have induced macroeconomists to begin applying general equilibrium models.
This book describes a class of dynamic general equilibrium models built to help interpret time series of observations of economic aggregates and to predict the consequences of alternative government interventions. The first part of the book describes dynamic programming, search theory, and real dynamic capital pricing models. Among the applications are stochastic optimal growth models, matching models, arbitrage pricing theories, and theories of interest rates, stock prices, and options. The remaining parts of the book are devoted to issues in monetary theory; currency-in-utility-function models, cash-in-advance models, Townsend turnpike models, and overlapping generations models are all used to study a set of common issues. By putting these models to work on concrete problems in exercises offered throughout the text, Thomas Sargent provides insights into the strengths and weaknesses of these models of money. An appendix on functional analysis shows the unity that underlies the mathematics used in disparate areas of rational expectations economics.
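For readers meeting these tools for the first time, the core object of dynamic programming (stated here in generic textbook form, not in the book's own notation) is the Bellman equation:

\[
v(x) = \max_{u} \big\{\, r(x,u) + \beta\, v\big(g(x,u)\big) \,\big\},
\]

where \(x\) is the state, \(u\) the decision, \(r\) the one-period return, \(g\) the law of motion, and \(\beta \in (0,1)\) the discount factor. The growth, matching, and asset-pricing applications listed above are all built on recursions of this form.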
This book on dynamic equilibrium macroeconomics is suitable for graduate-level courses; a companion book, Exercises in Dynamic Macroeconomic Theory, provides answers to the exercises and is also available from Harvard University Press.
Edgar Allan Poe - American Writers 89 was first published in 1970. Minnesota Archive Editions uses digital technology to make long-unavailable books once again accessible; these editions are published unaltered from the original University of Minnesota Press editions.
John Roemer points out that there are two views of equality of opportunity that are widely held today. The first, which he calls the nondiscrimination principle, states that in the competition for positions in society, individuals should be judged only on attributes relevant to the performance of the duties of the position in question. Attributes such as race or sex should not be taken into account. The second states that society should do what it can to level the playing field among persons who compete for positions, especially during their formative years, so that all those with the relevant potential can be considered.
Common to both positions is the idea that at some point the principle of equal opportunity holds individuals accountable for their achievement of particular objectives, whether they be education, employment, health, or income. Roemer argues that there is consequently a "before" and an "after" in the notion of equality of opportunity: before the competition starts, opportunities must be equalized, by social intervention if need be; but after it begins, individuals are on their own. The different views of equal opportunity should be judged according to where they place the starting gate that separates "before" from "after." Roemer works out in a precise way how to determine the location of the starting gate in the different views.
These three elegant essays develop principles central to the understanding of the diverse ways in which imperfect information affects the distribution of resources, incentives, and the evaluation of economic policy. The first concerns the special role that information plays in the allocation process when it is possible to improve accuracy through private investment. The common practice of hiring “experts” whose information is presumably much better than their clients' is analyzed. Issues of cooperative behavior when potential group members possess diverse pieces of information are addressed. Emphasis is placed on the adaptation of the “core” concept from game theory to the resource allocation model with differential information.
The second essay deals with the extent to which agents can influence the random events they face. This is known as moral hazard, and in its presence there is a potential inefficiency in the economic system. Two special models are studied: the role of moral hazard in a monetary economy, and the role of an outside adjudicatory agency that has the power to enforce fines and compensation.
The final essay discusses the problem of certainty equivalence in economic policy. It develops conditions under which a full stochastic optimization can be carried out by solving a related, much simpler “certainty equivalence” problem. The resulting reduction in computational complexity is very great compared with the potential loss of efficiency.
Essential Demographic Methods brings to readers the full range of ideas and skills of demographic analysis that lie at the core of social sciences and public health. Classroom tested over many years, filled with fresh data and examples, this approachable text is tailored to the needs of beginners, advanced students, and researchers alike. An award-winning teacher and eminent demographer, Kenneth Wachter uses themes from the individual lifecourse, history, and global change to convey the meaning of concepts such as exponential growth, cohorts and periods, lifetables, population projection, proportional hazards, parity, marity, migration flows, and stable populations. The presentation is carefully paced and accessible to readers with knowledge of high-school algebra. Each chapter contains original problem sets and worked examples.
From the most basic concepts and measures to developments in spatial demography and hazard modeling at the research frontier, Essential Demographic Methods brings out the wider appeal of demography in its connections across the sciences and humanities. It is a lively, compact guide for understanding quantitative population analysis in the social and biological world.
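To illustrate the level of mathematics involved (a standard formula, given here for orientation rather than quoted from the text), exponential growth at annual rate \(R\) means

\[
K(t) = K(0)\, e^{R t}, \qquad t_{\text{double}} = \frac{\ln 2}{R},
\]

so a population growing at 2 percent per year doubles in roughly \(\ln 2 / 0.02 \approx 35\) years, the kind of calculation the book develops from high-school algebra.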
At a time of unprecedented expansion in the life sciences, evolution is the one theory that transcends all of biology. Any observation of a living system must ultimately be interpreted in the context of its evolution. Evolutionary change is the consequence of mutation and natural selection, two processes that can be described by mathematical equations. Evolutionary Dynamics is concerned with these equations of life. In this book, Martin A. Nowak draws on the languages of biology and mathematics to outline the mathematical principles according to which life evolves. His work introduces readers to the powerful yet simple laws that govern the evolution of living systems, no matter how complicated they might seem.
Evolution has become a mathematical theory, Nowak suggests, and any idea of an evolutionary process or mechanism should be studied in the context of the mathematical equations of evolutionary dynamics. His book presents a range of analytical tools that can be used to this end: fitness landscapes, mutation matrices, genomic sequence space, random drift, quasispecies, replicators, the Prisoner’s Dilemma, games in finite and infinite populations, evolutionary graph theory, games on grids, evolutionary kaleidoscopes, fractals, and spatial chaos. Nowak then shows how evolutionary dynamics applies to critical real-world problems, including the progression of viral diseases such as AIDS, the virulence of infectious agents, the unpredictable mutations that lead to cancer, the evolution of altruism, and even the evolution of human language. His book makes a clear and compelling case for understanding every living system—and everything that arises as a consequence of living systems—in terms of evolutionary dynamics.
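The flavor of these equations can be conveyed by the standard replicator dynamic, one of the tools named above (stated in generic textbook form rather than quoted from the book):

\[
\dot{x}_i = x_i \big( f_i(x) - \phi(x) \big), \qquad \phi(x) = \sum_j x_j f_j(x),
\]

where \(x_i\) is the frequency of type \(i\), \(f_i\) its fitness, and \(\phi\) the population's average fitness: types fitter than average increase in frequency, and types less fit than average decline.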
John G. Cragg and Burton G. Malkiel collected detailed forecasts by professional investors concerning the growth of 175 companies and used this information to examine the impact of such forecasts on the market evaluations of the companies and to test and extend traditional models of how stock market values are determined.
A formal model in the social sciences builds explanations when it structures the reasoning underlying a theoretical argument, opens avenues for controlled experimentation, and can lead to hypotheses. Yet more importantly, models evaluate theory, build theory, and enhance conjectures. Formal Modeling in Social Science addresses the varied helpful roles of formal models and goes further to take up more fundamental considerations of epistemology and methodology.
The authors integrate the exposition of the epistemology and the methodology of modeling and argue that these two reinforce each other. They illustrate the process of designing an original model suited to the puzzle at hand, using multiple methods in diverse substantive areas of inquiry. The authors also emphasize the crucial, though underappreciated, role of a narrative in the progression from theory to model.
Transparency of assumptions and steps in a model means that any analyst will reach equivalent predictions whenever she replicates the argument. Hence, models enable theoretical replication, essential in the accumulation of knowledge. Formal Modeling in Social Science speaks to scholars in different career stages and disciplines and with varying expertise in modeling.
This book reports the authors' research on one of the most sophisticated general equilibrium models designed for tax policy analysis. Significantly disaggregated and incorporating the complete array of federal, state, and local taxes, the model represents the U.S. economy and tax system in a large computer package. The authors consider modifications of the tax system, including those being raised in current policy debates, such as consumption-based taxes and integration of the corporate and personal income tax systems. A counterfactual economy associated with each of these alternatives is generated, and the possible outcomes are compared.
This compact and original exposition of optimal control theory and applications is designed for graduate and advanced undergraduate students in economics. It presents a new elementary yet rigorous proof of the maximum principle and a new way of applying the principle that will enable students to solve any one-dimensional problem routinely. Its unified framework illuminates many famous economic examples and models.
This work also emphasizes the connection between optimal control theory and the classical themes of capital theory. It offers a fresh approach to fundamental questions such as: What is income? How should it be measured? What is its relation to wealth?
The book will be valuable to students who want to formulate and solve dynamic allocation problems. It will also be of interest to any economist who wants to understand results of the latest research on the relationship between comprehensive income accounting and wealth or welfare.
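For orientation, the maximum principle in its standard form (a generic textbook statement; the book's elementary proof and notation may differ) says that the problem

\[
\max_{u(t)} \int_0^T f\big(x(t),u(t),t\big)\,dt \quad \text{subject to} \quad \dot{x} = g(x,u,t), \quad x(0) = x_0,
\]

is solved through the Hamiltonian \(H = f + \lambda g\): along an optimal path, \(u\) maximizes \(H\) at each instant, the costate obeys \(\dot{\lambda} = -\partial H / \partial x\), and, with the terminal state free, \(\lambda(T) = 0\).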
Economies are constantly in flux, and economists have long sought reliable means of analyzing their dynamic properties. This book provides a succinct and accessible exposition of modern dynamic (or intertemporal) macroeconomics. The authors use a microeconomics-based general equilibrium framework, specifically the overlapping generations model, in which two overlapping generations are alive in every period. This model allows the authors to fully describe economies over time and to employ traditional welfare analysis to judge the effects of various policies. By choosing to keep the mathematical level simple and to use the same modeling framework throughout, the authors are able to address many subtle economic issues. They analyze savings, social security systems, the determination of interest rates and asset prices for different types of assets, Ricardian equivalence, business cycles, chaos theory, investment, growth, and a variety of monetary phenomena.
Introduction to Dynamic Macroeconomic Theory will become a classic of economic exposition and a standard teaching and reference tool for intertemporal macroeconomics and the overlapping generations model. The writing is exceptionally clear. Each result is illustrated with analytical derivations, graphs, and worked-out examples. Exercises, which are strategically placed, are an integral part of the book.
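A minimal sketch of the framework (in standard textbook notation, assumed here rather than taken from the book): an agent born at time \(t\) earns a wage when young and lives off savings when old, solving

\[
\max_{c^y_t,\; c^o_{t+1}} \; u(c^y_t) + \beta\, u(c^o_{t+1}) \quad \text{subject to} \quad c^y_t + s_t = w_t, \qquad c^o_{t+1} = (1 + r_{t+1})\, s_t,
\]

where \(s_t\) is saving and \(r_{t+1}\) the return on assets. Market clearing between the coexisting young and old generations then determines the interest rates and asset prices whose behavior the authors analyze.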
Martin Shubik brings classical oligopoly theory and research in mathematical economics close to new studies in industrial organization and simple game experiments in this imaginative and important new work. He engages the reader by creating a market model and by explaining its availability as a computer program, thus promoting interest in game experiments. In all, he admirably succeeds in increasing our understanding of the meaning of competitive and cooperative behavior and of market structure.
This unusual book covers a variety of topics: economic explanation, model building, analyses of duopoly and oligopoly, product differentiation, contingent demand, demand fluctuations, the study of non-symmetric markets, and advertising. All of these parts of Shubik's overall pattern of interpretation may also be used in a game that, more or less, coincides with the exposition of theory and the subject matter of accounting. Basic accounting items are fully linked to the oligopoly model and theory. Shubik bridges the gap between information as it appears to the businessman—the player in the game—and the economic model and abstraction of the market as it appears to the economic theorist.
Over the last several decades, mathematical models have become central to the study of social evolution, both in biology and the social sciences. But students in these disciplines often lack the tools to understand them. A primer on behavioral modeling that includes both mathematics and evolutionary theory, Mathematical Models of Social Evolution aims to make students and professional researchers in biology and the social sciences fully conversant in the language of the field.
Teaching biological concepts from which models can be developed, Richard McElreath and Robert Boyd introduce readers to many of the typical mathematical tools that are used to analyze evolutionary models and end each chapter with a set of problems that draw upon these techniques. Mathematical Models of Social Evolution equips behaviorists and evolutionary biologists with the mathematical knowledge to truly understand the models on which their research depends. Ultimately, McElreath and Boyd’s goal is to impart the fundamental concepts that underlie modern biological understandings of the evolution of behavior so that readers will be able to more fully appreciate journal articles and scientific literature, and start building models of their own.
This book examines, in rigorous, quantitative detail, the structure of trade between Japan and the United States, tracing the evolution of trade interdependence and the causes of its increasing intensity. It also looks at sectoral differences in interdependence—at the patterns behind changes in the composition of trade and the complex factors that determine how individual sectors of each economy respond to economic change in all the others.
In the first part, the author designs and estimates a multicountry, multisectoral general equilibrium model. The model is operationalized with careful estimates of the parameters that govern demand, production, and trade in both economies. In the second part, the model is employed to explore various aspects of interdependence and commercial policy. Peter Petri's findings indicate, among other things, that the American and Japanese economies are more closely related than one might judge from the size of their trade. As a result of differences in the structures of the two economies, their interdependence is sharply asymmetric, with economic events in the United States having a greater impact on Japan than vice versa. The study also shows that the roots of bilateral conflict can be traced to structural causes, and suggests that recent structural changes may have increased the incentives for protectionism.
This volume is the latest research report from the Harvard Water Program in the series that began with Design of Water-Resource Systems and includes Simulation Techniques for Design of Water-Resource Systems and Streamflow Synthesis. The emphasis is on the systems analysis of the control of water quality in a river basin or watershed. Classical methods such as low-flow augmentation are analyzed as well as novel ones such as instream aeration and piping of effluents from their point of origin to less harmful points of discharge. Particular attention is paid to the economic evaluation of the methods studied and to the resolution of the political conflicts that are likely to arise in a situation where the costs of combating pollution are borne by different people from those who benefit from the improvement.
The main thesis is that the technical, economic, and political aspects of water quality management have to be considered together in the search for effective, economical, and politically acceptable solutions to the problems of deteriorating water quality. Some practical methods for integrating these diverse considerations in a systems analysis are presented.
In this book, John Roemer presents a unified and rigorous theory of political competition between parties. He models the theory under many specifications, including whether parties are policy oriented or oriented toward winning, whether they are certain or uncertain about voter preferences, and whether the policy space is uni- or multidimensional. He examines all eight possible combinations of these choice assumptions, and characterizes their equilibria.
He fleshes out a model in which each party is composed of three factions, concerned respectively with winning, with policy, and with publicity; parties compete with one another. When internal bargaining is combined with external competition, a natural equilibrium emerges, which Roemer calls party-unanimity Nash equilibrium.
Assuming only the distribution of voter preferences and the endowments of the population, he deduces the nature of the parties that will form. He then applies the theory to several empirical puzzles, including income distribution, patterns of electoral success, and why there is no labor party in the United States.
From the Republican Party's "Southern Strategy" in the U.S. to the rise of Le Pen's National Front in France, conservative politicians in the last thirty years have capitalized on voters' resentment of ethnic minorities to win votes and undermine government aid to the poor. In this book, the authors construct a theoretical model to calculate the effect of voters' attitudes about race and immigration on political parties' stances on income distribution.
Drawing on empirical data from the U.S., Britain, Denmark, and France, they use their model to show how parties choose their platforms and compete for votes. They find that the Right is able to push fiscal policies that hurt working and middle class citizens by attracting voters who may be liberal on economic issues but who hold conservative views on race or immigration. The authors estimate that if all voters held non-racist views, liberal and conservative parties alike would have proposed levels of redistribution 10 to 20 percent higher than they did. Combining historical analysis and empirical rigor with major theoretical advances, the book yields fascinating insights into how politicians exploit social issues to advance their economic agenda.
Rational Expectations and Econometric Practice was first published in 1981. Minnesota Archive Editions uses digital technology to make long-unavailable books once again accessible; these editions are published unaltered from the original University of Minnesota Press editions.
Assumptions about how people form expectations for the future shape the properties of any dynamic economic model. To make economic decisions in an uncertain environment, people must forecast such variables as future rates of inflation, tax rates, government subsidy schemes, and regulations. The doctrine of rational expectations uses standard economic methods to explain how those expectations are formed.
This work collects the papers that have made significant contributions to formulating the idea of rational expectations. Most of the papers deal with the connections between observed economic behavior and the evaluation of alternative economic policies.
Robert E. Lucas, Jr., is professor of economics at the University of Chicago. Thomas J. Sargent is professor of economics at the University of Minnesota and adviser to the Federal Reserve Bank of Minneapolis.
Macroeconomics is in disarray. No one approach is dominant, and an increasing divide between theory and empirics is evident.
This book presents both a critique of mainstream macroeconomics from a structuralist perspective and an exposition of modern structuralist approaches. The fundamental assumption of structuralism is that it is impossible to understand a macroeconomy without understanding its major institutions and distributive relationships across productive sectors and social groups.
Lance Taylor focuses his critique on mainstream monetarist, new classical, new Keynesian, and growth models. He examines them from a historical perspective, tracing monetarism from its eighteenth-century roots and comparing current monetarist and new classical models with those of the post-Wicksellian, pre-Keynesian generation of macroeconomists. He contrasts the new Keynesian vision with Keynes's General Theory, and analyzes contemporary growth theories against long traditions of thought about economic development and structural change.
This book gives a practical, applications-oriented account of the latest techniques for estimating and analyzing large, nonlinear macroeconomic models. Ray Fair demonstrates the application of these techniques in a detailed presentation of several actual models, including his United States model, his multicountry model, Sargent's classical macroeconomic model, autoregressive and vector autoregressive models, and a small (twelve-equation) linear structural model. He devotes a good deal of attention to the difficult and often neglected problem of moving from theoretical to econometric models. In addition, he provides an extensive discussion of optimal control techniques and methods for estimating and analyzing rational expectations models.
A computer program that handles all the techniques in the book is available from the author, making it possible to use the techniques with little additional programming. The book presents the logic of this program. A smaller program for personal microcomputers for analysis of Fair's United States model is available from Urban Systems Research & Engineering, Inc. Anyone wanting to learn how to use large macroeconomic models, including researchers, graduate students, economic forecasters, and people in business and government both in the United States and abroad, will find this an essential guidebook.
In 1965, a group of economists at Harvard University established the Project for Quantitative Research in Economic Development in the Center for International Affairs. Brought together by a common background of fieldwork in developing countries and a desire to apply modern techniques of quantitative analysis to the policy problems of these countries, they produced this volume, which represents that part of their research devoted to formulating operational ways of thinking about development problems.
The seventeen essays are organized into four sections: General Planning Models, International Trade and External Resources, Sectoral Planning, and Empirical Bases for Development Programs. They raise some central questions: To what extent can capital and labor substitute for each other? Does development require fixed inputs of engineers and other specialists in each sector or are skills highly substitutable? Is the trade gap a structural phenomenon or merely evidence of an overvalued exchange rate? To what extent do consumers respond to changes in relative prices?
Equally at home in economic theory and political philosophy, John Roemer has written a unique book that critiques economists’ conceptions of justice from a philosophical perspective and philosophical theories of distributive justice from an economic one. He unites the economist’s skill in constructing precise, axiomatic models with the philosopher’s in exploring the assumptions of those models. His synthesis will enable philosophers and economists to engage each other’s ideas more fruitfully.
Roemer first shows how economists’ understanding of the fairness of various resource allocation mechanisms can be enriched. He extends the economic theory of social choice to show how individual preferences can be aggregated into social preferences over various alternatives. He critiques the standard applications of axiomatic bargaining theory to distributive justice, showing that they ignore information on available resources and preference orderings. He puts these variables into the models, which enables him to generate resource allocation mechanisms that are more consonant with our intuitions about distributive justice. He then critiques economists’ theories of utilitarianism and examines the question of the optimal population size in a world of finite resources.
Roemer explores the major new philosophical concepts of the theory of distributive justice—primary goods, functionings and capability, responsibility in its various forms, procedural versus outcome justice, midfare—and shows how they can be sharpened and clarified with the aid of economic analysis. He critiques and extends the ideas of major contemporary theories of distributive justice, including those of Rawls, Sen, Nozick, and Dworkin. Beginning from the recent theories of Arneson and G. A. Cohen, he constructs a theory of equality of opportunity. Theories of Distributive Justice contains important and original results, and it can also be used as a graduate-level text in economics and philosophy.
Experts agree that the earth will eventually run out of certain low-cost, nonrenewable resources, possibly as early as a century from now. Will the transition to reliance on other, more abundant resources be smooth or discontinuous? Might industrial societies experience a marked decline in living standards—and become a radically different kind of society from the one we now know? Geologists maintain that once inexpensive high-grade resources are exhausted, economic growth will slow. Economists are more optimistic: they believe that new technologies and materials will be substituted rapidly enough to prevent major economic dislocations.
Toward a New Iron Age? takes an important step toward reconciling these divergent views. It is the most comprehensive study of the economic consequences of resource depletion—in particular, it is a thorough exploration of the prospects for one key metal, copper. The authors draw on geological and engineering data to calculate the resources now available and to assess the feasibility of substituting alternatives. Using linear programming and a range of hypothetical base conditions, they are able to estimate the course, through the next century and beyond, of several crucial factors: the rate at which copper resources will be used and when they will be depleted; how the price of the metal will fluctuate; when alternative materials will be substituted, in what patterns, and at what costs. By the late twenty-first century, the authors believe, low-cost copper will no longer be available. Industrial societies will have to operate on more abundant resources such as iron, silica, and aluminum. They will enter, in short, a New Iron Age.