Industrial Metabolism: Restructuring for Sustainable Development (UNU, 1994, 376 pages)
Part 2: Case-studies
9. A historical reconstruction of carbon monoxide and methane emissions in the United States, 1880-1980
Carbon monoxide combustion sources
Carbon monoxide is produced by incomplete combustion of carbon-based fuels. This occurs when a carbonaceous fuel is burned in a rich mixture, i.e. in a reducing atmosphere. In recent years, by far the greatest tonnages of CO in the United States, as elsewhere, have been produced by the automobile; hence we consider this source first. Emissions depend on driving conditions, as shown in figures 1 and 2. In particular, uncontrolled emissions decrease with average speed. National average CO emissions for urban driving prior to the adoption of emission controls for automobiles (c. 1965) were estimated to be about 3.1 per cent by volume, or 31,000 ppm (Hurn, 1968). More recent revised (1986) EPA estimates have reduced this by 33 per cent. It is now thought that uncontrolled CO emissions from automotive vehicles amount to 2 per cent of exhaust gases by volume.
The composition of gasoline is approximately C8H18, with a molecular weight of 114. In the fuel/air mixture corresponding to average engine conditions during the pre-control era (i.e. resulting in 2 per cent CO in the exhaust), approximately 43 molecules of atmospheric nitrogen (N2), and slightly less than 12 molecules of oxygen (O2) combine with 1 "molecule" of gasoline. The exhaust mixture contains about 43 molecules of nitrogen, 9 molecules of water vapour (H2O), 6.8 molecules of CO2, and 1.2 molecules of CO, plus very small amounts of oxygen, unburned fuel, etc. Note that about 2 out of 13 of the carbon atoms in the fuel are not fully oxidized. In terms of weight, uncontrolled automotive CO emissions were equivalent to the ratio of molecular weights (1.2*28)/114 = 0.295. In other words, for each ton of gasoline consumed, nearly 0.3 tons (295 kg) of toxic CO was emitted.
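The stoichiometry described above can be reproduced in a few lines. This is a sketch using only the exhaust composition figures quoted in the text (43 N2, 9 H2O, 6.8 CO2, 1.2 CO per "molecule" of C8H18) and standard molecular weights:

```python
# Check of the pre-control exhaust mixture described in the text.
MW_GASOLINE = 114.0  # C8H18
MW_CO = 28.0

exhaust = {"N2": 43.0, "H2O": 9.0, "CO2": 6.8, "CO": 1.2}
total = sum(exhaust.values())

# CO as a volume (mole) fraction of the exhaust
co_vol_pct = 100.0 * exhaust["CO"] / total

# Mass of CO emitted per unit mass of fuel burned
co_per_ton_fuel = exhaust["CO"] * MW_CO / MW_GASOLINE

print(f"CO in exhaust: {co_vol_pct:.1f}% by volume")        # ~2.0%
print(f"CO per ton of gasoline: {co_per_ton_fuel:.3f} tons") # ~0.295
```

This confirms both figures in the text: 1.2 molecules of CO out of 60 exhaust molecules is 2 per cent by volume, and (1.2 x 28)/114 = 0.295 tons of CO per ton of gasoline.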
Emissions of CO from automotive vehicles in 1970 were originally estimated by EPA (1969) to be 100 teragrams (Tg). Controls introduced in new automobiles cut emissions from motor vehicles by about 50 per cent per unit of fuel consumed. In absolute terms (using the revised estimates), the reduction was from a peak of 62.7 Tg in 1970 to 52.7 Tg in 1980 (and 48.5 Tg in 1984). This relatively minor decrease is in sharp contrast to the 90 per cent reduction in emissions that was set as a goal of air-pollution-control efforts in the late 1960s.
The slowness of the change is partly due to the fact that cars remain in use for more than a decade on average. Thus changes in the emissions characteristics of new cars are not reflected in overall fleet emissions for many years. However, it is also evident that the current automotive emissions control technology is less effective on older cars.
Motor vehicle engines are not the only combustion sources of CO. By comparison, however, most enclosed stationary combustion processes utilize enough excess air to sharply reduce the amounts of unburned hydrocarbons and CO. Old coal-burning industrial boilers, for instance, generated CO emissions in the range of 0.1 to 0.55 kg of CO per metric ton of coal, corresponding to an emission coefficient in the range 0.0001 to 0.00055. More recent boilers tend to be at the lower end of the range. By comparison with automobiles, then, electric power generating facilities and large industrial boilers are not significant sources of CO.
Residential uses of fuel, which are not as efficient as industrial uses, emit considerably more carbon monoxide to the air, but still contribute much less than vehicles. EPA estimated national aggregate emissions of CO from all stationary furnaces (excluding incinerators and open fires) to be about 7.4 Tg in 1980, up from 4.4 Tg in 1970. The increase was apparently due to temporarily decreased domestic use of natural gas and higher domestic use of wood and oil, because of a gas shortage attributable to gas price controls that were phased out around 1980.
Carbon monoxide emissions from the incineration of solid waste in 1970 were estimated by EPA to be about 6.4 Tg. This fell to 2.2 Tg in 1980, owing to the phasing out of many inefficient incinerators. Specialized modern high-temperature waste-incineration plants emit very little CO. Open refuse fires, wood fires, the burning of agricultural wastes, and fires in structures are still large contributors, though they have been sharply reduced from earlier years. Unfortunately, the emission coefficients for uncontrolled fires must be regarded as somewhat uncertain.
To summarize, emissions of CO from combustion processes are primarily related to the amount of fuel burned. The production of CO is also a function of combustion efficiency: high-temperature combustion with a moderate amount of excess air yields relatively little CO (but does emit NOx). Low-temperature fires, especially "smouldering" ones, do generate significant emissions. For this reason, it can safely be assumed that the average CO output per unit of fuel consumed was somewhat greater in the past than it is today. We justify this statement in detail later; the main cause was greater dependence on solid fuels (wood, then coal) for domestic and residential heating purposes.
Up until the late nineteenth century, wood was the most important fuel used by Americans. In 1850 it provided over 90 per cent of the fuel supply, and in 1870 it still provided 75 per cent. Physical consumption peaked at nearly 140 million cords in the latter year. After this point, the use of wood for fuel gradually declined, reaching 2.6 per cent of total energy consumption in 1955 (Schurr and Netschert, 1960).
Wood was used preferentially as long as it was abundant, in spite of several disadvantages. Its preparation was relatively labour-intensive compared to coal production. Wood also has less heat value than coal: a cord of dried hardwood weighs twice as much as a ton of coal but contains only 80 per cent of its heat value. Because of its inefficient use, dependence on wood fuel contributed to the energy intensiveness of the economy. In 1850 households consumed 90 per cent of all wood fuel, with 75 per cent burned for space heating in large open fireplaces that operated at low thermal efficiency. Little wood fuel was utilized to generate mechanical energy outside the transportation sector (Greenberg, 1980).
During the decades after 1850, wood was increasingly replaced as a fuel in all sectors, first by anthracite and then by bituminous coal. Per capita consumption of fuel wood dropped from 3.51 net tons in 1850 to 2.17 tons in 1880, and then to 1.01 tons in 1900. In 1879, about 95.5 per cent of fuel wood was consumed by households and about 4.5 per cent for industrial purposes. In eastern cities such as Philadelphia, and in some western cities such as Pittsburgh with good access to coal supplies, mineral fuels began to replace wood in the 1820s. In rural areas, however, and in cities without good access to coal, wood remained the primary fuel throughout the nineteenth century. Among industrial users, railroads were the heaviest consumers of wood fuel in manufacturing and transportation up to about 1870. After this date, however, mineral fuels (first anthracite but later mainly bituminous coal) rapidly replaced wood. By 1880, mineral fuels, mostly bituminous coal, already constituted more than 90 per cent of locomotive fuel (Schurr and Netschert, 1960; Tarr and Koons, 1982).
For our purposes it is necessary to estimate the past emissions of CO from fuel wood and coal combustion for residential heating and cooking. Even as late as 1910, the equipment used for these purposes was very inefficient. This is confirmed by direct evidence: ashes collected in New York City were found to consist of 55 per cent carbon by weight (Hering and Greeley, 1921). When wood or lump coal is burned in fireplaces, kitchen stoves, or domestic furnaces (as much of it was), combustion tends to be incomplete. This is because the first stage of low-temperature combustion produces mainly CO. CO ignites and oxidizes to CO2 in air only at temperatures above 1,191-1,216 °F and at concentrations above 12.5 per cent by volume.
Thus, without mechanical fuel-air mixing (or convective mixing in large furnace volumes), the production of a significant amount of unburned CO is almost inevitable. In wood stoves, CO emissions range from 0.083 to 0.37 ton per ton of wood (0.16 average), while in open fireplaces emissions range from 0.011 to 0.04 tons/ton (0.022 average). In the case of anthracite coal, burned mostly in stoves or furnaces, we assume slightly greater combustion efficiency (higher temperature), corresponding to the low end of the range of wood (0.08 tons/ton), and we assume that the proportion of wood burned in stoves or furnaces was 20 per cent in 1800, rising to 50 per cent in 1860, then declining gradually to 10 per cent in 1950. We also assume that the proportion of wood burned in stoves had risen to 30 per cent by 1980, owing to the revival of wood-burning as a source of residential heat.
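The assumptions above imply a time-varying average emission coefficient for wood burning. A minimal sketch, using only the stove/fireplace split and the average emission factors quoted in the text (intermediate years would be interpolated):

```python
# Weighted wood-burning CO coefficient implied by the text's assumptions.
EF_STOVE = 0.16       # tons CO per ton of wood, stove/furnace average
EF_FIREPLACE = 0.022  # tons CO per ton of wood, open-fireplace average

# Assumed fraction of wood burned in stoves or furnaces, by year (from text)
stove_share = {1800: 0.20, 1860: 0.50, 1950: 0.10, 1980: 0.30}

def wood_co_coefficient(year):
    """Average tons of CO emitted per ton of wood burned in a given year."""
    s = stove_share[year]
    return s * EF_STOVE + (1.0 - s) * EF_FIREPLACE

for year in sorted(stove_share):
    print(year, round(wood_co_coefficient(year), 4))
```

For 1860, for example, the weighted coefficient is 0.5 x 0.16 + 0.5 x 0.022 = 0.091 tons of CO per ton of wood, nearly three times the 1950 value.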
The burning of trash and refuse is still a major source of CO emissions, but emission coefficients for earlier methods are difficult to estimate. Municipal batch-type incinerators (since 1945) were mostly built to a design that resulted in CO emissions of about 0.55 per cent (5,200-5,700 ppm), compared to 12 per cent CO2. Since each molecule occupies the same volume regardless of weight, this means that about 1 molecule of CO is created for every 21 molecules of CO2. Assuming that dry combustible refuse (mostly paper) has a carbon content of 39 per cent by weight (similar to cellulose), we infer that about 0.0325 tons of CO are emitted per ton of refuse burned. Unfortunately, we lack statistics on the tonnage of refuse that has been incinerated over the last century.
Emissions of CO from industrial processes
The other significant sources of past and present CO emissions (see table 4) are (or were) industrial processes, primarily in the metallurgical and petrochemical industries. In particular, the reduction of iron ore to metallic iron is a process requiring the production of carbon monoxide in large amounts. This takes place in a blast furnace; the carbon is supplied by coke (nowadays supplemented by other hydrocarbons). However, until the coking process was developed, the source of carbon for smelting was charcoal, made from wood. The resulting pig iron is a solid solution of iron carbide (Fe3C) in a matrix of iron (Fe). Further refining to pure iron or steel requires that most of this carbon be oxidized. Both smelting and refining result in the production of large quantities of carbon monoxide.
Other metals are reduced by similar carbothermic smelting processes, also yielding CO as a by-product. In the case of aluminium, reduction takes place in an electrolytic cell, and the carbon is supplied by anodes made from petroleum coke. Again, the carbon combines with the oxygen in the alumina, yielding pure aluminium and CO.
On deeper reflection, much of the apparent complexity of the chemistry is irrelevant. Each atom of oxygen originally combined with a metal as ore is subsequently combined with an atom of carbon as CO. The amount of carbon monoxide created in metallurgical smelting/refining processes is thus exactly proportional to the amount of the metal that is reduced from ore. Thus, in the case of iron, hypothetical production of 50 million metric tons of pig iron (94 per cent Fe by weight) corresponds to a production of 47 million tons of pure Fe. This, in turn, implies an input of 67.14 million tons of Fe2O3, of which 20.14 million tons are oxygen. When the reduction processes are complete, it follows that exactly 35.24 million tons of CO must have been produced. The "bottom line" is that 0.75 tons of CO are produced within the blast furnace or steel furnace for each ton of iron (Fe) reduced from iron ore. (This relationship is universal, so it applies to all ores and all furnaces.) By a similar calculation, each ton of virgin aluminium generates 1.555 tons of CO. Under modern conditions, of course, most of this CO is either captured and utilized as fuel (called blast-furnace gas) or it is flared. However, in earlier periods of industrial history this was not the case.
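The "bottom line" figures can be verified directly from the oxide stoichiometry. A sketch, using standard atomic weights, in which every oxygen atom removed from the ore leaves the furnace as one molecule of CO:

```python
# Stoichiometry behind the "tons of CO per ton of metal" figures.
A_O, A_FE, A_AL = 16.0, 55.85, 27.0
MW_CO = 28.0

def co_per_ton_metal(oxygen_atoms, atomic_wt_metal, metal_atoms):
    """Tons of CO produced per ton of metal reduced from its oxide,
    given the oxide formula (metal_atoms metal + oxygen_atoms O)."""
    # mass of oxygen removed per unit mass of metal
    oxygen_mass = oxygen_atoms * A_O / (metal_atoms * atomic_wt_metal)
    # each oxygen atom leaves as one CO molecule
    return oxygen_mass * MW_CO / A_O

# Fe2O3 -> 2 Fe + 3 O;  Al2O3 -> 2 Al + 3 O
print(round(co_per_ton_metal(3, A_FE, 2), 3))  # iron: ~0.75
print(round(co_per_ton_metal(3, A_AL, 2), 3))  # aluminium: ~1.556
```

The iron result reproduces the 0.75 tons of CO per ton of Fe derived in the text, and the aluminium result matches the 1.555 figure to within rounding.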
As noted above, the theoretical ratio of CO to Fe is 0.75. This corresponds to a C:Fe ratio of 0.43. That is, at least 0.43 tons of C are needed in principle to produce a ton of iron. Assuming that all this carbon is derived from coke, this would correspond to a blast furnace "coke rate" of 0.404 tons of coke per ton of pig iron (94 per cent Fe). As a matter of fact, according to the Office of Technology Assessment, the coke rate in Japan in 1976 was 0.43, as compared to 0.48 in the Federal Republic of Germany, 0.52 in France, and 0.60 in the United States and the United Kingdom.
As already noted, until the 1850s iron-making in the United States was based on charcoal. The making of charcoal is a very old process, dating back at least 2,000 years, that remained relatively unchanged until the eighteenth century. In that century and into the nineteenth, its greatest use was in the production of iron. Charcoal was an ideal furnace fuel because it was relatively free from sulphur and phosphorus impurities and because its ash had properties that were helpful in smelting the ore.
Essentially, the making of charcoal involved the controlled burning of wood near the site of an iron furnace. In the north-eastern United States, these furnaces were located on so-called "iron plantations" situated within large wooded areas. The availability of a flowing stream was also a necessity. In 1830, there were a number of iron plantations of 10,000 or more acres. Normally, the tasks of woodcutting and charcoal-making demanded more workers than any other at the ironworks: it sometimes required as many as 12 colliers to keep a single furnace working.
Because dry weather was necessary, most charcoal was made during the late spring, summer, and early autumn. The process followed was to stack bundles of cord wood in 6- to 10-foot lengths in a cone with a base of about 25 feet in diameter, cover it with damp leaves and turf, and burn it for between three and ten days. No attempt was made to condense any of the by-product wood chemicals or impurities vented from the chimney during the charcoaling process.
As suggested already, early charcoal (and iron) furnaces were not thermodynamically efficient; charcoal iron furnaces consumed huge amounts of fuel. Peter Temin has calculated that, since one acre of timber provided an average of about 30 cords of wood and each cord of wood produced 40 bushels of charcoal, an acre of timber supplied 1,200 bushels of charcoal. In 1840, a ton of pig iron required 180 bushels of charcoal, so that the wood from one acre of land would supply fuel to make about 6.5 tons of pig iron. Factors such as wood quality and labour quality could make a difference in production.
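Temin's arithmetic can be checked directly, using the 1840 figures as quoted above:

```python
# Charcoal-iron fuel arithmetic (1840 figures quoted in the text).
CORDS_PER_ACRE = 30          # cords of wood per acre of timber
BUSHELS_PER_CORD = 40        # bushels of charcoal per cord of wood
BUSHELS_PER_TON_IRON = 180   # bushels of charcoal per ton of pig iron

bushels_per_acre = CORDS_PER_ACRE * BUSHELS_PER_CORD     # 1,200 bushels
iron_per_acre = bushels_per_acre / BUSHELS_PER_TON_IRON  # ~6.7 tons

print(bushels_per_acre, round(iron_per_acre, 1))
```

The exact quotient is 1,200/180 = 6.67 tons of pig iron per acre, which the text rounds to about 6.5.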
From the late eighteenth century up until the 1820s, there was little change in production technology in the charcoal iron industry. Production was largely a function of oven size, and furnace design remained stable until the 1820s. The typical blast furnace of the older type had constricted internal dimensions and a narrow top diameter, so that total height might be only three or four times the interior width. The furnaces were open at the top, permitting carbon monoxide, heat, and smoke to escape. The furnace was charged with alternate layers of charcoal, ore, and limestone. As the air blast was applied, the ore melted at the air inlet (tuyere) of the furnace and dropped to the hearth, while the floating slag was drawn off from the top of the molten iron pool. When the charge was exhausted, the molten pig iron was run into a casting bed of sand, about twice a day (Bining, 1973; Paskoff, 1983).
In the years after 1840, and especially after the Civil War, technological improvements reduced charcoal consumption per ton of iron smelted. New blast furnaces reached a height of 40-45 feet in the 1840s, and 65 feet after the Civil War. These new furnaces used higher-temperature air blasts and were built with narrower internal dimensions and more vertical walls. They were located in the immediate vicinity of ore beds, and generally along canals and navigable rivers. This suggests that proximity to ore and transportation, rather than to timber for charcoal, was the critical factor in the location of charcoal iron furnaces (Schallenberg and Ault, 1977). Such furnaces had much higher rates of production than those of the antebellum years. One Michigan furnace produced a ton of iron with about 81.5 bushels of charcoal, less than half the amount estimated to have been necessary in the 1840s (Schallenberg and Ault, 1977).
There were continuous improvements in the technology of charcoal-making during the second half of the nineteenth century. Charcoal kilns were introduced in the 1850s and were adopted throughout the industry after the Civil War. The kiln was a permanent shell built of masonry, brick, or sheet iron, with vents to control the rate of carbonization. The maximum yield of kilns was between 45 and 50 bushels of charcoal per cord. A few such kilns had gas by-product collection pipes for the production of wood chemical distillates, but this was rare (Schallenberg and Ault, 1977). In the 1870s and 1880s, the retort method of making charcoal was adopted, with wood being carbonized by external heat. The output of these retorts was 60-65 bushels of charcoal per cord of wood. With this technology, the methanol, tar, and other volatile by-products were passed into an integral chemical plant for condensation, separation, and eventual sale. These plants were highly mechanized and capital-intensive but had much lower labour costs than previous methods of charcoaling (Schallenberg and Ault, 1977). These improvements, however, took place in the context of a relatively static market for charcoal iron and a dramatic increase in the use of mineral fuels, especially bituminous coal and coke, in iron production.
By 1854, the first date for which comparative figures are available, charcoal fuel was used to produce 306,000 tons of pig iron, compared with 303,000 tons produced with anthracite and coke and 49,000 tons with bituminous coal. By 1870, charcoal accounted for 326,000 tons of pig iron, against 830,000 tons for anthracite and coke and 509,000 tons for bituminous coal and coke. By 1905, bituminous coal and coke had taken a dramatic lead with 20,965,000 tons, against 1,644,000 tons for anthracite and coke and only 353,000 tons for charcoal.
"Best practice" charcoal-based iron furnaces built in the 1880s, competing with less efficient coke-based technologies, achieved a conversion rate of about 1.0 (tons carbon per ton of pig iron). This became the average for charcoal-based iron furnaces by 1919, when average coke-based iron still required 1.2 tons/ton. However, by that time, best-practice coke-based iron-making was as efficient as the remaining few charcoal furnaces. The coke rate is still regarded as a primary measure of efficiency. The overall average C:Fe ratio can be assumed to have declined almost linearly from about 1.9 in 1840 to its 1976 value of 0.6, as shown in figure 3.