
Planting Row Configuration

10.2.4.1 Rectangular Spacing

Historically, many tree planting guides for pines recommended a square spacing. However, recent trends favor a rectangular arrangement in which the distance between rows is greater than the distance between pines within the rows. In a few cases, the distance between rows is four times the distance between trees within a row. This planting configuration is especially useful in reducing establishment costs when trees are planted

Figure 10.1 The cost of harvesting trees is a function of tree size (x-axis: average tree volume, cubic meters). The actual price to harvest trees that average 0.3 m3 will depend on equipment used, terrain, fuel costs, labor costs, year of harvest, exchange rates, and so on.

with machines, on sites that have been ripped prior to planting, and in cases where herbicides are applied in bands (centered over each row of pines).
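A quick way to compare configurations is to compute the planting density implied by the row and within-row spacings. The sketch below is a minimal illustration using hypothetical spacings (a 2.4 x 2.4 m square layout versus a rectangular layout with the same area per tree); the numbers are assumptions for demonstration, not values from the text.

```python
# Minimal sketch: planting density implied by row and within-row spacing.
# The spacings below are hypothetical examples, not values from the text.

def trees_per_hectare(row_spacing_m: float, within_row_spacing_m: float) -> float:
    """Each tree occupies row_spacing * within_row_spacing square meters."""
    return 10_000.0 / (row_spacing_m * within_row_spacing_m)

# Square layout: 2.4 m x 2.4 m (assumed).
square = trees_per_hectare(2.4, 2.4)

# Rectangular layout with the same area per tree: rows 4x the within-row
# distance (4.8 m x 1.2 m), which eases machine planting and banded herbicides.
rectangular = trees_per_hectare(4.8, 1.2)

print(f"Square 2.4 x 2.4 m:      {square:.0f} trees/ha")
print(f"Rectangular 4.8 x 1.2 m: {rectangular:.0f} trees/ha")
# Both print ~1736 trees/ha: density is unchanged, only the row geometry differs.
```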

Sorghum Composition

As with any bioenergy crop, biomass composition is important to breeders, producers and end users. However, the definition of quality varies greatly depending on who is defining it and the specific method being used for processing. In addition, the specific type of sorghum, the stage of growth, and the production environment strongly affect composition [31,32]. For example, in forage sorghum, high protein content is important for forage quality, but lower protein content is more desirable for biomass sorghum, as nitrogen is of little value in the biomass; it is more valuable when it is returned to and retained in the field [33]. Since bioenergy sorghum is grown to produce carbohydrates, the composition of both non-structural (sugar and starch) and structural carbohydrates (cellulose, hemicelluloses and lignin) is important. In biomass sorghum, structural composition of the biomass is the critical factor, while in sweet sorghum non-structural carbohydrates are of primary importance. In either case, sorghum shows a significant range of variation in both non-structural and structural composition [34,35].

For structural carbohydrates, Stefaniak et al. [36] reported a twofold range in variation among sorghum types for lignin, hemicelluloses and cellulose. Dahlberg et al. [37] evaluated commercially available forage sorghum hybrids for composition and reported that existing hybrids could be used as a biomass source for ethanol production. While some of this variation is dependent on the environment and maturity [35,38], there is a genetic component as well [35]. In addition to selection for optimum composition, all approaches will select against high ash content, as excessive ash is an undesirable trait for biofuel processing. It must be noted that harvest approaches will also influence this trait, and therefore any process that reduces ash uptake is beneficial.

Research examining variation in forage composition and its effects on digestion in animal systems is of relevance to the bioenergy breeder because several fermentation approaches mimic a ruminant digestive system. There is considerable debate as to the net gain of energy using current and proposed lignocellulosic ethanol conversion techniques [39]. However, the consensus is that ethanol production from starch has a positive net energy gain utilizing current technologies [40]. There is also evidence that the overall energy balance for lignocellulosic ethanol will be more favorable than starch-based ethanol, if logistical and technical processing hurdles can be solved [41]. Additional improvements are needed to make lignocellulosic energy production more cost effective as compared to fossil fuels or other renewable energy sources [42,43].

The biomass composition of energy sorghum varies depending upon the genotype and environment in which it is grown, but the relative importance of these sources of variation is not well known. In a study containing six sorghum genotypes, ranging from grain to forage and sweet types, biomass yield varied by 82%, indicating the existence of genetic variability for biomass production among sorghum types [44]. In sweet sorghum, variations in composition of traits such as glucans (cellulose) ranging from 24.7 to 38.5%, xylans (hemicellulose) from 8.5 to 13.9% and lignin from 9.3 to 13.0% were reported [45]. Hoffmann [35] reported variation in biomass composition across 15 genotypes of photoperiod-sensitive sorghum in five environments: cellulose ranged from 26.9 to 31.8%, xylan from 14.9 to 18.4% and lignin from 8.3 to 18.9%. The environment had a greater effect on composition than genotype per se in the six bioenergy sorghum cultivars evaluated [35].

Environmental Benefits

Poplar trees can also be grown to provide environmental benefits. For example, one option for managing sewage sludge, an unwanted by-product of the water purification industry, is its application on agricultural land as a soil amendment. Recently, because of the food chain contamination risk, there has been a trend toward banning agricultural sludge application in many countries. Therefore, non-food, non-feed energy crops could provide an alternative application site for sewage sludge from municipal water treatment plants. A high fertilizer value of sewage sludge was noted by Moffat et al. [25] in studies designed to evaluate the effect of sewage sludge application and wastewater irrigation on biomass production of two poplar genotypes. The three-year experiment showed that irrigation affected biomass yield more than sewage sludge application and that waste application at the rates used did not pose any risk of nutrient pollution of groundwater.

The special importance of riparian forest or stream buffer zones is understandable and, therefore, establishing buffer zones in forest or agricultural space is treated as a standard environmentally friendly practice in many countries [26]. Growing poplar in these systems is, therefore, a logical option for combining biofuel production with surface and groundwater protection. Furthermore, this would also increase biodiversity near water courses and their banks. Poplars, because of their physiological properties, are very well suited to have an important role in establishing riparian buffer zones. Henri and Johnson [26] suggested that social debate is needed to determine if riparian zones should be left as a “no touch” area or should be managed. They also evaluated options for managing such buffers and found that harvesting 50% of the area and selling biomass could provide both economic and environmental benefits. Fortier et al. [27] studied a multifunctional system of hybrid poplar riparian buffer in southern Quebec, Canada, and also reported beneficial environmental and economic effects. They stated that biomass produced in riparian buffers can be harvested for different purposes, especially with a multiclonal structure where some clones could be harvested for energy and some for pulp. When biomass productivity in buffers is considered, it is possible to achieve yields comparable to SRP poplar plantations and, since mineral nitrogen is often a limiting factor, the poplars also provide a very effective way to control nutrient flow to groundwater and surface water resources.

Agro-ecological zones have been used for global, national and regional evaluation of agricultural practices [28]. Recently this methodology was enhanced with digital geographic databases. This advanced technology was used to evaluate agricultural areas in Eastern Europe as well as North and Central Asia for their suitability to produce dedicated energy crops. A large variation in the potential for biofuel production was found among these countries, with the highest potential for poplar production being in the Czech Republic and Georgia, due to good soil conditions and a favorable climate. European energy use was estimated at 111 GJ per capita, with Latvia, Lithuania, Hungary and Estonia having the potential to produce more than 140 GJ per capita of bioenergy. The studies also identified some technical and non-technical barriers for bioenergy utilization, thus emphasizing the necessity for future research programs.

The economic soundness of poplar plantations for energy was also evaluated by Yemshanov and McKenny [29]. They constructed two scenarios: (1) “business-as-usual,” where only the biomass has value; and (2) a “fibre + carbon” scenario, where benefits from sequestering carbon in silvicultural systems are included. Many factors were considered, with transportation costs appearing to play a very important role. When burnt for energy, the cost for 1 GJ from biomass ranged from $4 to $5 for scenario 1 and started at $3 for scenario 2. Obviously, adding the benefits of carbon sequestration helped but, as the analyses show, biomass cost was still higher than the price of low-quality coal currently being used by power plants. Assuming the option of producing bioethanol from poplar biomass becomes feasible, the economics of biomass production will be substantially improved.

Several authors point out that the most important environmental effect of SRP poplar is its perennial nature, which promotes increased diversity and frequency of many soil organisms and has a beneficial impact on soil organic matter [30]. The use of SRP poplar as a vegetative filter was also studied by Coyle et al. [31], who concluded that coppicing poplar was suitable for this purpose because of its extensive root system and high evapotranspiration rate. Poplar clones in their study were irrigated with leachate from municipal landfill and compared with control treatments receiving mineral fertilizers. Effects on soil meso- and microfauna were also compared. They reported that microfauna (i.e., soil nematodes) as well as mesofauna (mainly insects) were more abundant in control treatments, while with the leachate, biodiversity among soil organisms was much higher. Based on these findings, the authors concluded that introducing phytoremediation technologies did not always lead to higher sustainability within the soil environment.

Studies on growth, biomass distribution and nutrient use by eight poplar (Populus balsamifera L., P. trichocarpa Hook) and hybrid poplar (P. trichocarpa Hook x P. deltoides Bartr.) clones in Sweden were conducted by Karacic and Weih [32]. The clones were chosen from Canada because its latitude is similar to that of Sweden. The objective was to evaluate genotype by environment interactions with a special focus on phytoremediation. All studied clones showed a high and positive response to irrigation. The results helped identify clones that were better suited for phytoremediation, which involves the application of as much water and nutrients as possible with minimum leaching from the system.

In California, U.S.A., irrigation water can be of poor quality because of high selenium, boron, and/or sodium chloride concentrations. Research being conducted by Banuelos et al. [33] is, therefore, focused on identifying plants that are resistant to elevated levels of these contaminants. Trees have an advantage over herbaceous plants because they transpire large amounts of water, produce high amounts of biomass, live longer, have deeper roots, and, for many species, can re-grow after being cut. Poplar has all of these features and, therefore, this genus is widely used for phytoremediation. However, because of the wide genetic variation among species, hybrids and clones of this genus, screening experiments focused on the tolerance of the various genotypes are essential. Among the findings of this research were differences in the chloride and boron concentrations of both lower and upper leaves in poplar genotypes classified as susceptible or resistant to high concentrations of these micronutrients. The mechanism of resistance to high salt concentrations in the irrigation water was identified as early abscission of lower leaves containing a high concentration of chloride. Although the physiology of boron tolerance or toxicity remains to be determined, it appears that boron uptake is inhibited when irrigation waters contain elevated chloride concentrations, although other resistance mechanisms may exist within the Populus genus.

Poplar grown in SRPs was also able to effectively degrade ethylene glycol, which is present in the environment because of its use as a coolant and deicing agent. Two mechanisms for removal of ethylene glycol (microbial degradation in the rhizosphere and uptake by the trees through evapotranspiration) have been identified [34,35]. Based on these results, it is very probable that similar mechanisms can be effective for removal of other organic compounds. This was verified by Jordahl et al. [36], who reported that hydrocarbon-degrading microorganisms were more common in the rhizosphere of poplar trees than in bulk soil.

Growth and survival of poplar clones at sites contaminated with hydrocarbons and at sites polluted by long-lasting industrial activity near Lake Michigan were investigated by Zalesny et al. [37]. In some spots, the hydrocarbon content of the soil exceeded 1% (10 g per kilogram of soil). The average poplar survival rate was 67%, with variation ranging from 56 to 100% and losses being higher for 60-cm cuttings than for 20-cm cuttings. The growth rate was highest for commercial clones bred for SRP energy production.

To minimize bioaccumulation of toxic trace elements, Wang and Jia [38] proposed growing poplar or larch on contaminated soils. The reason for selecting these tree species for phytoremediation was that their deep roots are able to create microenvironments in the soil where immobilization or uptake of the metals can occur. The growth of the two tree species in soil spiked with a mixture of cadmium, copper and zinc was investigated by Wang and Jia [38], who found that poplar could remove 56.2 g ha-1 of cadmium, 196 g ha-1 of copper and 1170 g ha-1 of zinc. The capacity to transfer heavy metals from roots to aboveground organs was higher in poplar than in larch, leading the authors to favor poplar for planting on contaminated soils.

Poplar cannot be considered a cadmium hyperaccumulator because it is able to take up only 10 mg Cd kg-1, whereas the known hyperaccumulator Thlaspi caerulescens can accumulate 100 mg kg-1. However, because of the high biomass production of poplar plantations, the total accumulation of cadmium per hectare is considerably higher and can actually reach 1000 g Cd ha-1 for poplar compared to just 250 g Cd ha-1 for T. caerulescens. Pietrini et al. [39] reported the results of studies on the cadmium phytoremediation potential of several poplar clones. They found high genetic variation among the 15 Italian clones that were studied. The most promising clones showed three desirable traits that could positively affect phytoremediation: firstly, a relatively high cadmium accumulation level in woody parts; secondly, high leaf tolerance when measured as photosynthetic activity; and, thirdly, a very fast juvenile growth rate. The authors concluded that the best indicators of the suitability of a given poplar genotype for phytoremediation would be chlorophyll fluorescence parameters.
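The per-hectare comparison follows directly from tissue concentration multiplied by harvestable biomass. The sketch below reproduces that arithmetic; the biomass figures are back-calculated assumptions chosen only to match the 1000 and 250 g Cd ha-1 totals quoted above, not measured values.

```python
# Minimal sketch: cadmium removal per hectare = tissue concentration x biomass.
# The biomass values are assumptions back-calculated from the totals quoted in
# the text (1000 g Cd/ha for poplar, 250 g Cd/ha for T. caerulescens).

def cd_removal_g_per_ha(conc_mg_per_kg: float, biomass_kg_per_ha: float) -> float:
    """Convert tissue Cd concentration (mg/kg) x dry biomass (kg/ha) to g/ha."""
    return conc_mg_per_kg * biomass_kg_per_ha / 1000.0  # mg -> g

poplar = cd_removal_g_per_ha(conc_mg_per_kg=10.0, biomass_kg_per_ha=100_000)   # assumed biomass
thlaspi = cd_removal_g_per_ha(conc_mg_per_kg=100.0, biomass_kg_per_ha=2_500)   # assumed biomass

print(f"Poplar:          {poplar:.0f} g Cd/ha")   # ~1000 g/ha
print(f"T. caerulescens: {thlaspi:.0f} g Cd/ha")  # ~250 g/ha
```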

Finally, a Life Cycle Assessment (LCA) approach was used to quantify the environmental impact of Italian poplar plantations [40]. Two types of short-rotation plantations (a 1- to 2-year cutting frequency and a medium cutting frequency of >5 years) were distinguished. All energy inputs and outputs were taken into account, as well as other environmentally important aspects (acidification potential, eutrophication potential, global warming potential, ozone layer depletion, human toxicity potential, ecotoxicity potential, photochemical oxidation formation potential), over the life span of poplar plantations grown for energy purposes, from field preparation (in the first year) to field recovery (in the 25th year). It was concluded that, from the environmental aspect, the best solution is to replace industrial fertilizers with cattle manure; this can reduce total energy use by 19.8%. The authors also concluded that future environmental soundness can be improved by the breeding of high-yielding clones of different poplar hybrids.

Disease and Pest Control

The exotic status of Miscanthus and its currently small area of cultivation are an advantage in terms of the number of pests and diseases, found in its native areas, that might threaten its production [95,96]. However, as the area of Miscanthus cultivation increases, new pest and disease threats are likely to emerge.

Fusarium has been implicated in Miscanthus crop failure but its significance has not yet been quantified [97].

The common rustic moth (Mesapamea secalis) and the ghost moth larvae (Hepialus humuli) have also caused production problems for Miscanthus crops [76]. In addition, the larvae of armyworms (Spodoptera frugiperda) have been shown to infest plots of Miscanthus x giganteus [98], along with the aphid Rhopalosiphum maidis [99]. In the United States, plant-parasitic nematodes (Helicotylenchus, Xiphinema, Paratylenchus, Hoplolaimus, Tylenchorhynchus, Criconemella, Longidorus, Heterodera, Paratrichodorus, Hemicriconemoides, Pratylenchus) have been identified as potential pathogens in Miscanthus biofuel crops [100].

Many other arthropods (Thysanoptera, other Hexapoda, and mites such as Tetranychus and other Tetranychidae, Prostigmata, Acari, Arachnida) have been identified in Miscanthus [101], but their prevalence is far lower than that of insect pests of sugarcane [102]. In the United Kingdom, Clifton-Brown et al. [3] reported the presence of the cereal leaf aphid Rhopalosiphum maidis, although this pest appears to be more of a problem in greenhouses than in the field. Bradshaw et al. [99] suggested that aphids (Rhopalosiphum maidis, Sipha flava) and the armyworm Spodoptera frugiperda

Figure 4.6 Cartography of world diseases (orange) and pests (purple) of Miscanthus. Information collected from Thinggard [97], Gottwald and Adam [101], DEFRA [76], Clifton-Brown et al. [76], Prasifka et al. [98], Bradshaw et al. [99], Mekete et al. [100].

have the potential to damage young Miscanthus. Other cereal aphids and aphid-transmitted viruses could also be potential problems in Miscanthus. Finally, in comparison to sugarcane, where more than 1500 insect species have been identified, it appears that few insects have been reported to feed on Miscanthus x giganteus [98].

In summary, although few pests and diseases of Miscanthus have been identified in the cultivation area (Figure 4.6), the species appears robust, with a high tolerance to pests and diseases. This will hopefully reduce the requirement for chemical pest control [103], although the genetic basis for this apparent high pest tolerance still needs to be verified as the cultivation area increases.

Wheat Straw

Cereal grains (wheat, barley, oats, sorghum and rice) are widely grown in the United States, and wheat straw constituted 20-25% of potential 2012 U.S. biofuel feedstocks (Table 8.1). Agronomic considerations for determining supplies of wheat straw that can be harvested sustainably include: (1) annual wheat straw yield and its stability; (2) straw harvesting efficiencies; (3) crop rotation and tillage practices for assessing soil conservation and sustainability factors; (4) nutrient removal and fertilizer replacement values; (5) site-specific field evaluations including economic factors that inform decision support systems; and (6) competing economic uses for harvested cereal straw. Addressing these issues has been the focus of several recent research efforts, including the Sun Grant partnership [32,33], the U.S. Pacific Northwest Climate Friendly Farming™ project [34], and the USDA Solutions To Economic and Environmental Problems (STEEP) grant program [35].

In the United States, the amount of wheat straw potentially available for use as a biofuel feedstock was assessed through the Sun Grant partnership, where the team used USDA-NASS county-level grain yield data from 1999 through 2008 [32]. Grain yield data were combined with the harvest index (HI), the ratio of grain yield to total aboveground biomass (grain plus straw) at harvest, to estimate straw yields. The HI of wheat, however, is not a constant value [32], with reported values ranging from 0.20 to 0.70 and an average across locations and years of 0.44. This average is greater than the historic HI value of 0.375 commonly used for winter wheat [19], presumably because newer grain varieties are more efficient and produce less straw per unit of grain than older varieties. The HI data have important implications for estimating the amount of straw produced based on grain yield because an increase in HI from 0.375 to 0.44 results in a 24% reduction in estimated wheat straw yield. Consequently, generating straw yield maps for the United States based on grain yield can only be considered as a first step toward evaluating straw feedstocks for the purpose of siting biofuel plants. In addition to overall production, understanding the year-to-year stability of straw yield is also an important consideration for assessing feedstock supplies. Karow [32] noted that significant annual fluctuations in wheat straw stocks could occur where some areas with high average straw yields also had years with no or limited wheat straw yield.
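Because straw yield is back-calculated from grain yield and HI, the sensitivity to the assumed HI can be checked directly: straw = grain x (1 - HI)/HI. The short sketch below reproduces the 24% figure quoted above; the grain yield used is an arbitrary placeholder, since the percentage change does not depend on it.

```python
# Minimal sketch: estimating straw yield from grain yield and harvest index (HI),
# and the sensitivity of that estimate to the HI value used.
# straw = grain * (1 - HI) / HI

def straw_from_grain(grain_yield: float, hi: float) -> float:
    """Straw yield implied by a grain yield and a harvest index (same units as grain)."""
    return grain_yield * (1.0 - hi) / hi

grain = 3.0  # Mg/ha, arbitrary placeholder; the relative change does not depend on it

straw_old = straw_from_grain(grain, hi=0.375)  # historic HI for winter wheat
straw_new = straw_from_grain(grain, hi=0.44)   # Sun Grant average HI

reduction = (straw_old - straw_new) / straw_old
print(f"HI 0.375: {straw_old:.2f} Mg/ha straw")
print(f"HI 0.44:  {straw_new:.2f} Mg/ha straw")
print(f"Reduction in estimated straw: {reduction:.0%}")  # ~24%
```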

Overall straw yield serves as a starting point for quantifying available biofuel feedstock that can be sustainably harvested. Factors such as straw harvesting efficiencies and the residues (straw) required for controlling wind and water erosion and for maintaining soil productivity then reduce the amount of straw that can be harvested without impairing the soil resource base. Current straw harvesting efficiencies (e.g., straw baling) are near 50% [7]; however, technological advances could increase residue harvesting efficiencies to around 75% [36]. It is more difficult to assess the multitude of crop rotation and soil tillage factors that influence how harvesting crop residues will affect soil conservation and other agroecosystem services. In many cases, conservation needs that depend on leaving adequate cereal residues in the field will be more limiting than current harvesting efficiencies.

In developing estimates for straw feedstocks that could be sustainably harvested, Kerstetter and Lyons [37] estimated that dry straw inputs of 3.4-5.6 Mg ha-1 yr-1 are required for conservation purposes in the western United States, whereas others [38] reported that 4.5 Mg residue ha-1 yr-1 were needed. These numbers are similar to the 4-5 Mg residue ha-1 yr-1 reported [39] to be required for maintaining soil organic matter in dryland cropping systems near Pendleton, OR. Assuming a harvest index of 0.4, wheat grain yields of 2.0-3.3 Mg ha-1 yr-1 (3.0-5.0 Mg ha-1 yr-1 of wheat straw) would be needed to supply straw for conservation needs, and harvestable straw estimates would need to be based on grain yields that exceed this threshold. An important point to realize in these calculations is that the quantities of residue required for conservation needs are on an annual basis. In many dryland scenarios, however, continuous wheat is seldom grown and crop rotations often include a fallow year when no crop or crop residues are produced [4], or other crops such as peas (Pisum sativum) or lentils (Lens culinaris) that produce far less residue than wheat are grown [14]. Thus, crop residue production must be quantified for an entire rotation in order to assess the average annual residue returns on a rotational basis. Therefore, in a two-year wheat-fallow rotation, wheat will need to produce grain yields of 4.0-6.6

Mg ha-1, twice that reported [37,38], to meet conservation needs. Unfortunately, many estimates of wheat straw availability have assumed continuous wheat production [37,38] when assessing conservation needs. This has resulted in “sustainable harvest estimates” for wheat straw that are greatly inflated compared to the actual amount available with other rotations. Accurate estimates of the wheat residue quantities returned to soil are in themselves insufficient to assess sustainable residue harvest, due to the important influence of other key factors such as crop rotation and tillage practice.
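The grain-yield thresholds above follow from the same HI relationship plus an adjustment for how often a residue-producing crop appears in the rotation. The sketch below reproduces both sets of numbers; the 3.0-5.0 Mg ha-1 yr-1 conservation requirement and HI of 0.4 come from the text, while treating the rotation adjustment as a simple multiplier is a simplification.

```python
# Minimal sketch: grain yield a wheat crop must produce so that its straw meets an
# annual conservation requirement, with an adjustment for rotations in which wheat
# (the main residue producer) is grown only once every `rotation_years` years.
# grain = straw_required * HI / (1 - HI); values follow the ranges quoted in the text.

def grain_needed(straw_required_annual: float, hi: float, rotation_years: int = 1) -> float:
    """Grain yield (Mg/ha) needed in the wheat year to supply the rotation's residue."""
    straw_needed_in_wheat_year = straw_required_annual * rotation_years
    return straw_needed_in_wheat_year * hi / (1.0 - hi)

HI = 0.4
for straw_req in (3.0, 5.0):  # Mg/ha/yr conservation requirement range from the text
    continuous = grain_needed(straw_req, HI, rotation_years=1)
    wheat_fallow = grain_needed(straw_req, HI, rotation_years=2)
    print(f"straw requirement {straw_req} Mg/ha/yr: "
          f"continuous wheat needs {continuous:.1f} Mg/ha grain, "
          f"wheat-fallow needs {wheat_fallow:.1f} Mg/ha grain")
# Prints ~2.0/4.0 and ~3.3/6.7 Mg/ha, matching the thresholds quoted in the text.
```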

Evaluating the impact of straw harvest on important soil quality indicators such as SOC, aggregation, or erosion requires long-term research, since annual changes are generally very small and can be temporally dynamic. In recognition of this need, the Sun Grant partnership organized a symposium at the 2009 International American Society of Agronomy (ASA) meetings entitled “Residue Removal and Soil Quality — Findings from Long-Term Research Plots.” Presentations at this symposium examined residue removal impacts in the context of various management practices including crop rotation, tillage, applied fertilizer and irrigation. The articles developed from this symposium were subsequently published in the Agronomy Journal (Huggins et al. [33]). The series includes results from long-term studies in Europe, Canada, Australia, and the United States. Key points included an assessment [40] that reviewed long-term studies from Europe, Australia, and Canada and cautioned against annual removal of straw because of the potential decrease in SOC. Due to the site-specific nature of residue harvest, they recommended that straw removal studies be coupled to areas where residue harvest is actually being considered and to not extrapolate using data from other areas.

Near Pendleton, OR [41], it was concluded from long-term dryland cropping system studies that residue removal in this predominantly wheat-fallow area will increase SOC depletion and that residue harvest will only be sustainable if wheat-fallow is replaced with continuous cropping and no-tillage. Nafziger and Dunker [42] reported on the long-term SOC trends under different crop rotation and fertilizer treatments at the University of Illinois Morrow Plots and emphasized the importance of adequate nutrient levels for maintaining SOC. Long-term plots at the University of Missouri Sanborn Field showed that the amount of field residues returned was positively related to SOC (Miles and Brown [43]). Gollany et al. [44] evaluated five long-term field experiments in North America with the CQESTR model and concluded that increasing soil carbon inputs through manure additions and/or crop intensification, as well as reducing tillage, were important strategies for mitigating residue harvest impacts on SOC. Finally, in irrigated systems, Tarkalson et al. [30] reported that SOC either increased or remained constant when wheat residues were removed and hypothesized that belowground biomass production was important for maintaining or increasing SOC under irrigation. They also pointed out that irrigated cropping systems in the Pacific Northwest and elsewhere tend to be diversified with crops such as alfalfa (Medicago sativa), potato (Solanum spp.), and sugarbeet (Beta vulgaris) in addition to wheat and corn, and that very little data on residue removal effects on SOC is available for those situations.

In combination, these papers conclude that under dryland or rainfed conditions, residue harvest will negatively impact soil organic matter and associated soil properties; however, harvest effects will be situation-dependent. Consequently, assessing residue harvest must be placed in a farming systems context that includes an evaluation of economic and environmental trade-offs specific to a given farm and location. Future challenges include the development of science-based, site-specific decision aids that enable growers to make economically sound and environmentally sustainable choices regarding residue harvest.

In 2009, USDA-ARS and land grant scientists in the Pacific Northwest established long-term field studies from a combination of current and new field locations to assess economic impacts of residue removal as well as effects on soil properties, soil-borne disease and crop performance [35]. Specific objectives of the project, funded through the USDA Solutions To Economic and Environmental Problems (STEEP) grant program, are to: (1) establish or use existing long-term field sites and assess impacts of wheat residue removal by mechanical harvest and burning on economic returns and subsequent crop performance; (2) assess environmental impacts (soil carbon sequestration, nutrient cycling, soil erosion) of residue removal by mechanical harvest and burning on established sites; and (3) develop field-scale and regional assessments of economic and environmental trade-offs associated with harvesting or burning crop residues.

Preliminary STEEP research from the Washington State University (WSU) Cook Agronomy Farm (CAF) estimated that the potential site-specific (37-ha field) lignocellulosic ethanol production from winter wheat residues would range from 813 to 1767 l ha-1 and average 1356 l ha-1, indicating that targeted harvesting of crop residues would be an important consideration. Harvesting only winter wheat residues, in a three-year rotation with spring wheat and spring peas (Pisum sativum), reduced residual carbon inputs to levels below those required to maintain SOC under conventional tillage practices. This occurred as a function of both residue removal and the inclusion of the low-residue-producing spring pea crop in rotation with wheat. Harvesting winter wheat residues under conventional tillage resulted in negative Soil Conditioning Indices (SCI) throughout the field. In contrast, SCIs under no-till were positive despite residue harvesting. Increased nutrient removal is also a consideration associated with harvesting crop residues for any use. In the STEEP study, the estimated value of N, P, K, and sulfur (S) removed in harvested wheat residue was $13.71 Mg-1. In high-residue-producing areas of the field, the estimated value of harvested residue in fertilizer replacement dollars exceeded $25 ha-1. Based on the potential SOC impact and increased nutrient cost, we concluded that substantial trade-offs exist in harvesting wheat straw for biofuel and that these trade-offs should be evaluated on a site-specific basis. Furthermore, support practices such as crop rotation, reduced tillage and site-specific nutrient management need to be considered if residue harvest is to be a sustainable option (Huggins and Kruger [14]).
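The fertilizer-replacement figures combine a per-tonne nutrient value with the amount of residue actually removed. The sketch below shows that relationship using the $13.71 Mg-1 value from the study; the residue harvest rates are hypothetical examples, chosen only to show when the $25 ha-1 level mentioned above is crossed.

```python
# Minimal sketch: fertilizer replacement value of harvested residue.
# The per-tonne value comes from the STEEP study ($13.71/Mg for N, P, K and S);
# the residue harvest rates below are hypothetical illustrations.

NUTRIENT_VALUE_PER_MG = 13.71  # $ per Mg of harvested wheat residue

def replacement_cost_per_ha(residue_harvested_mg_per_ha: float) -> float:
    """Dollar value of nutrients removed per hectare at a given residue harvest rate."""
    return residue_harvested_mg_per_ha * NUTRIENT_VALUE_PER_MG

for harvested in (1.0, 2.0, 3.0):  # Mg/ha, hypothetical harvest rates
    cost = replacement_cost_per_ha(harvested)
    flag = " (exceeds the $25/ha level noted for high-residue areas)" if cost > 25 else ""
    print(f"{harvested:.1f} Mg/ha harvested -> ${cost:.2f}/ha in nutrient value{flag}")
```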

Potential impacts of crop residue removal on SOC were also simulated for different tillage and rotation scenarios in the Pacific Northwest using the CropSyst model [45]. Preliminary outcomes show that harvesting winter wheat residue at the lowest simulated removal rate (50%) resulted in SOC losses over a 30-year simulation (Figure 8.7). Harvesting less than 50% of the residue was not considered to be practical or a cost-effective use of producer time and equipment. Use of continuous no-till practices, however, partially compensated for the effects of winter wheat residue removal on SOC.

From an economic perspective, dryland wheat growers in the Pacific Northwest typically receive $3 to $5 Mg-1 from custom operators, who harvest the majority of the straw that is exported from this region. Traditionally, the primary motive for growers to sell residue is to reduce post-harvest tillage operations, thus reducing their total operating costs in high-yielding areas by $35-60 ha-1, depending on tillage practices. However, growers have expressed concerns over long-term impacts of continual straw removal. Once the field

Figure 8.7 Thirty-year simulated changes in soil organic carbon under a three-year crop rotation of winter wheat, spring wheat, spring pea using the CropSyst model. Simulations consist of conventional tillage (CT) and no-tillage (NT) and residue retained (no harvesting) and residue removed where 50% of the winter wheat residue is harvested (removed) and all other residue (spring wheat and pea) retained.

studies and model simulations are more complete, we will estimate long-term economic impacts using partial enterprise budgets including nutrient replacement costs over time.

Sun Grant researchers are also evaluating existing straw markets to identify areas of potential residue harvest [32]. Existing markets for straw can be useful for identifying where straw is readily and reliably available. Identifying these potential markets is also important because they may significantly influence straw prices in a future biofuel market. With this background, the next steps in the DOE Sun Grant project are to identify those areas in the United States where sustained residue harvest seems feasible and to characterize those areas by determining: (1) What makes residue harvest possible in these areas? (2) Are these conditions likely to continue in the future? (3) If the area is irrigated, is the water source stable and will electricity costs affect production? (4) Are alternative markets already in place for harvested residues and, if so, at what cost would residues need to be purchased for biofuel use to be competitive? These and other questions need to be addressed as we think about residue harvest for biofuel use and the design of needed research and decision support systems for a residue-based biofuel system [33].

Typical Biomass Logistics Constraints

12.3.1 Resource Constraints

In this chapter, reference is made to production of the biomass (growing the plants) only in the context of (1) when during the year the biomass can be harvested, and (2) its properties at harvest, principally the moisture content. Any biomass crop that provides an extended harvest season has an advantage; remember, any business (not only bioenergy) wants to operate as many hours per year as possible. Herbaceous crops cannot be harvested during the growing season; thus, storage is always a component in the logistics system design.

Two examples from opposite ends of the harvest-season spectrum are offered. In the Southeast, wood is harvested year-round; it is said to be “stored on the stump.” Weather conditions in the Southeast are such that few harvest days are lost in the winter due to ice and snow. At the other end of the spectrum, consider the harvest of corn stover in the Upper Midwest. The grain harvest is completed in the fall and then the stover is collected. In some years, there are fewer than 15 days between the completion of the grain harvest and the time when the fields are covered with snow and no stover can be collected. All feedstock required for year-round operation of a bioenergy plant using only corn stover must be harvested in a two- to five-week period, a significant challenge for the logistics system.
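The scale of that challenge can be made concrete with simple arithmetic: if a year's feedstock must be gathered in a few weeks, the harvest-and-delivery rate must be many times the plant's daily consumption, and nearly all of the annual supply must pass through storage. The sketch below illustrates this with a hypothetical plant demand; only the two- to five-week window comes from the text.

```python
# Minimal sketch: harvest rate and storage implied by a short harvest window.
# The plant demand is a hypothetical assumption; the 2- to 5-week window is from the text.

ANNUAL_DEMAND_MG = 300_000                   # Mg/yr of corn stover, hypothetical plant size
DAILY_CONSUMPTION = ANNUAL_DEMAND_MG / 365   # Mg/day converted by the plant

for window_days in (14, 35):                 # two- to five-week harvest window
    harvest_rate = ANNUAL_DEMAND_MG / window_days                 # Mg/day to be collected
    ratio = harvest_rate / DAILY_CONSUMPTION                      # multiple of plant consumption
    stored = ANNUAL_DEMAND_MG - DAILY_CONSUMPTION * window_days   # Mg that must go to storage
    print(f"{window_days:2d}-day window: harvest {harvest_rate:,.0f} Mg/day "
          f"({ratio:.0f}x daily consumption); ~{stored / ANNUAL_DEMAND_MG:.0%} of annual "
          f"supply passes through storage")
```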

Disease and Pest Management

A number of diseases and insects have been reported for switchgrass, and some concerns have been raised regarding large-scale planting for feedstock production in the future [54]. However, no diseases or insects have caused demonstrated economic losses to date. The list of diseases reported in the literature includes: rust associated with Puccinia spp.; anthracnose caused by Colletotrichum spp.; smut caused by Tilletia maclaganii; sharp eyespot caused by Rhizoctonia cerealis; helminthosporium spot blotch caused by Bipolaris sorokiniana; viral disease caused by Panicum mosaic virus (PMV); Phoma leaf spot (Phoma spp.); and Fusarium root rot (Fusarium spp.) [55-64]. Most of these diseases are reported from a few field observations, with some cultivars being more susceptible to specific diseases. Individual genotypes can have susceptibility to diseases, but released cultivars and germplasms have been selected for a range of resistance to many diseases [16]. Sanderson [56] indicated that higher anthracnose infection was observed in Trailblazer than in Cave-in-Rock. Cave-in-Rock is the cultivar most susceptible to smut [16]. Smut infection can significantly reduce switchgrass biomass and seed production. In Iowa, a smut-infected seed field did not produce seed for several years [60,65]. Thomsen et al. [66] reported that smut infection reduced Cave-in-Rock switchgrass biomass yield by as much as 40%. Consequently, smut seems to be the most serious disease at present.

Few insects have been reported in switchgrass and, at present, they generally appear to pose a limited threat. Grasshoppers (Orthoptera) are common herbage-feeding insects that could affect switchgrass biomass productivity [16]. Recently, two other insects have been identified in switchgrass fields and natural populations in the US Midwest. Prasifka et al. [67] identified a stem-boring caterpillar (Blastobasis repartella Dietz) and described its distribution and symptoms. Infestation of B. repartella can cause death of young tillers of switchgrass, but its effect on biomass yield has not been quantified. Reduced seed production is the primary concern with insects in switchgrass. Boe and Gagne [68] discovered a new species of gall midge [Chilophaga virgate Gagne (Diptera: Cecidomyiidae)] in South Dakota. Infestation by the gall midge was observed in the peduncle inside the sheath of the flag leaf, and the inflorescence never emerged. Depending on infestation rate, switchgrass seed production could be reduced by the gall midge. For example, the bluestem seed midge (Contarinia wattsi Gagne) was reported to reduce big bluestem (Andropogon gerardii Vitman) seed production by about 40% [69], and heavy infestations have been observed by the authors in switchgrass.

Hybrid System

The hybrid system (or FlexStand™ System) is a way of growing pines for multiple markets. One such planting configuration involves a row of sawtimber crop trees next to a row of pines intended for bioenergy. For example, alternating rows of sawtimber crop trees (harvested after 24 years) may be planted between adjacent bioenergy rows that are harvested after 14 years (Table 10.4). In some cases, all seedlings in the plantation may be from the same genetic source, while in other cases the genetics will vary by row [10]. For example, each row of sawtimber trees may be of one genotype (grown in containers) while alternating rows intended for bioenergy may be planted using less expensive bareroot stock (perhaps injecting them with paraquat two years before harvest [5]). Spacing for the biomass trees might be 2 m apart within the row while sawtimber trees might be spaced 3 m apart within the row. At some early age (perhaps 11-14 years) all of the biomass rows are harvested. On some sites this thinning might remove 100 green tonnes of biomass per ha. After the biomass thinning, the remaining pines might be on a 10 x 3 m spacing.
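The stand densities implied by this layout can be worked out from the spacings. The sketch below assumes rows alternate between sawtimber and bioenergy at 5 m apart, so that removing the bioenergy rows leaves the 10 x 3 m spacing quoted above; that row spacing is an inference for illustration, not a figure stated in the text.

```python
# Minimal sketch: stand densities for the alternating-row hybrid (FlexStand-type) layout.
# Assumption: adjacent rows are 5 m apart, so sawtimber rows end up 10 m apart once the
# intervening bioenergy rows are thinned (matching the 10 x 3 m post-thinning spacing).

ROW_SPACING_M = 5.0           # assumed distance between adjacent rows
WITHIN_ROW_BIOENERGY_M = 2.0  # from the text
WITHIN_ROW_SAWTIMBER_M = 3.0  # from the text

def stems_per_ha(row_spacing_m: float, within_row_m: float) -> float:
    """Stems per hectare when rows of this type occur every row_spacing_m metres."""
    return 10_000.0 / (row_spacing_m * within_row_m)

# Each row type occupies every second row, i.e. one row per 2 * ROW_SPACING_M of width.
bioenergy = stems_per_ha(2 * ROW_SPACING_M, WITHIN_ROW_BIOENERGY_M)   # ~500 stems/ha
sawtimber = stems_per_ha(2 * ROW_SPACING_M, WITHIN_ROW_SAWTIMBER_M)   # ~333 stems/ha

print(f"Bioenergy rows: {bioenergy:.0f} stems/ha (removed at the biomass thinning)")
print(f"Sawtimber rows: {sawtimber:.0f} stems/ha (left on a 10 x 3 m spacing)")
print(f"Total at establishment: {bioenergy + sawtimber:.0f} stems/ha")
```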

Establishment

Unlike the situation for other bioenergy crops, stand establishment is typically not a problem for sorghum, and costs associated with planting sorghum are significantly lower than for other crops. Sorghum stand establishment requires a well-prepared seedbed and adequate moisture to initiate the germination process. Therefore, the planting and stand establishment process in energy sorghums is similar, if not identical, to that for grain sorghum [48].

Assuming moisture is available, the primary factor in sorghum stand establishment is temperature. Sorghum is a warm-season crop that requires soil temperatures of at least 60°F (16°C) to initiate the germination process; temperatures below 60°F (16°C) will slow or even stop the process. In most regions where bioenergy sorghum will be grown, cool soil temperatures will not be a problem, but in more temperate climates soil temperature must be monitored. There are efforts to improve the tolerance and growth of sorghum under cool temperatures [49,50].

From a productivity standpoint, plant population and row spacing are probably the most critical management factors, with optimum density and spacing depending on the type of production system. For example, sweet sorghum processors prefer thick stalks that mimic sugarcane; however, higher yield is typically associated with higher plant densities [51]. Furthermore, row spacing modifications are limited to those that fit within existing harvesting equipment. Optimizing plant population and distribution to fit production programs is of significant importance [52].

Harvest Management (Cutting Height, Season, Frequency)

Optimum cutting cycle and plantation design were the focus of studies with three fast-growing clones at three locations in the United Kingdom. Populus trichocarpa was evaluated at two spacings (1.0 x 1.0 m and 2.0 x 2.0 m) and two- or four-year cutting cycles [51]. Annual biomass yield was always higher in the longer cutting cycle, and the 1 m2 spacing generally had a higher biomass yield than the 4 m2 option. The authors pointed out that all poplar clones gave higher yield at the site with the highest annual rainfall. They also suggested that the reason for better yields with the longer cutting cycle was a proper balance between root system and aboveground organ development. The authors also noted that a four-year cutting cycle is more economical due to lower harvest cost per unit of dry matter.

di Nasso et al. [52] pointed out that plant spacing and cutting cycles are the most crucial factors for successful establishment and biomass production by short-rotation poplar. Their report summarizes results of long-term studies (12 years) designed to identify the most important production indices in relation to different cutting cycles (from annual to triennial). They found that the shortest cutting cycles resulted in increased stool mortality, making the shortest cutting cycle less efficient than the other cycles studied. The highest efficiency in terms of energy output was noted for a triennial cutting cycle. The authors stated that the energy balance was positive for all studied cutting cycles and that, given good soil fertility plus low rates of fertilizer and pesticide application, short-rotation poplar plantations can be an excellent example of sustainability in twenty-first century agriculture.

Fang et al. [53] also tested four planting densities and three poplar clones at three cutting frequencies. Each of the experimental factors significantly affected biomass yield, with the highest annual production being obtained with a six-year cutting cycle. They concluded that, for China, a longer cutting cycle should be recommended because, regardless of plant density, biomass yield increased as cutting cycle length increased (i.e., from 10 to 13 Mg ODM ha-1 yr-1 when going from a four- to a six-year cutting cycle).

Guidi et al. [54] quantified the relationship between chemical composition of biomass obtained from SRP poplar and cutting frequency of plantations in order to answer the crucial question of “how to manage the plantation to achieve good quality of biomass for biochemical conversion into liquid biofuels.” They concluded that different cutting cycles did influence the biochemical conversion rate of the poplar biomass, with the highest ethanol yield being associated with a four-year cutting cycle. This occurred because, at that age, the relative content of cellulose was much higher than in poplar biomass obtained from two-year cutting cycles, when the hemicellulose content was higher, or from six-year cycles, when the lignin content was greater because of the additional two years of growth.