==Abstract==
Fusion energy, based on broadly available and practically inexhaustible resources such as lithium and deuterium and with minimal impact on the environment, aims at a change in the energy supply paradigm: instead of the current dependence on natural resources, with its environmental impact, energy would become a technology-dependent resource with adaptive, essentially unlimited availability and whose unit cost should decrease as technology progresses. This article intends to give a picture of where fusion research stands today: the achievements, the difficulties, and the current status, marked by the construction of the ITER experiment, which will demonstrate the scientific feasibility of fusion power, as well as the perspectives toward the first demonstration power plant, DEMO, which, according to the European Roadmap, could start construction shortly after the full-power experiments in ITER (before 2030) and be in full operation, delivering net electricity to the grid, by 2050.
It is widely recognized that energy supply is one of the largest challenges that mankind will face during this century. Population growth and increasing per capita consumption of goods and services in the emerging countries will likely double the energy demand within a couple of decades, despite the efforts toward efficiency and energy savings in the developed countries. In addition, new demands will arise from the need for massive water supply and food production or large-scale recycling of basic materials.
In this scenario, we will need massive sources of energy that are environmentally friendly and based on abundant primary resources. Nuclear fusion intends to be one of these sources, its main objective being to transform the energy paradigm: from today's dependence on natural resources, with its environmental impact, into a technology-dependent resource, in the same way as we see Internet access, mobile communications, or computing power today: a resource whose availability can grow easily with demand and whose cost per unit decreases as technology progresses.
Nuclear fission, the basic process in today's nuclear power plants, consists of breaking a large nucleus into medium-sized ones; nuclear fusion is based on the opposite reaction: the union of two small nuclei to form a larger, but still small, one. In both cases, the mass of the reaction products is slightly smaller than that of the original nuclei, and this lost mass is converted into energy according to Einstein's equation E = mc². The nuclear forces involved in the process are much larger than the electromagnetic forces which are the basis of the standard combustion of fuel, and so is the capability of energy production per unit mass of fuel: one gram of fusion fuel, equivalent to about 7 tons of oil [1], would be enough to provide the full energy consumption of an average person for more than a year.
Fusion is the reaction which powers the sun and all the stars. In the center of a star, the incredibly high gravitational forces generate the conditions for the fusion of hydrogen nuclei into deuterium as the first step of a chain of reactions in which deuterium fuses with the remaining hydrogen into helium (He3) and, later, the helium fuses with itself (to generate He4). On Earth, where we cannot count on such strong gravitational forces, we need to look for more accessible reactions, though they are still very hard to achieve. The fusion reaction with the largest cross section under reasonably achievable conditions is that of deuterium and tritium, two isotopes of hydrogen. The reaction produces a helium nucleus and a neutron and releases 17.6 MeV (megaelectronvolts) of energy, about 91,000 kWh per gram of fuel. Of this energy, 4/5 is carried by the neutron and the remaining 1/5 by the helium nucleus.
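As a sanity check on that figure, the energy per gram of a 50/50 deuterium–tritium mixture follows directly from the 17.6 MeV released per reaction; the short sketch below, using rounded atomic masses, reproduces the order of magnitude quoted above.

```python
# Order-of-magnitude check: energy released per gram of 50/50 D-T fuel.
AVOGADRO = 6.022e23            # particles per mole
EV_TO_J = 1.602e-19            # joules per electronvolt
E_REACTION_MEV = 17.6          # MeV released per D-T fusion reaction
MOLAR_MASS_DT = 2.014 + 3.016  # grams of fuel consumed per mole of reactions (one D + one T)

reactions_per_gram = AVOGADRO / MOLAR_MASS_DT
energy_joule = reactions_per_gram * E_REACTION_MEV * 1e6 * EV_TO_J
energy_kwh = energy_joule / 3.6e6

print(f"{energy_kwh:,.0f} kWh per gram of D-T fuel")
# ~94,000 kWh, consistent with the ~91,000 kWh per gram quoted above
```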
A fusion power plant would be essentially a thermal plant. The energy released by the fusion reactions is absorbed by a coolant and extracted to the heat exchangers and to the electricity-producing turbines. The fusion fuel would be composed of two species: deuterium and tritium. Deuterium exists in natural water in a fraction of 33 mg/L. Tritium, on the other hand, another, heavier, hydrogen isotope, is unstable and does not exist in nature; it is usually a secondary product of fission power plants. Fusion reactors would generate in situ the tritium they consume by means of neutron bombardment of lithium, another chemical element. Lithium is also very abundant in nature and, given that the required quantities are very small in comparison with the amount of energy obtained, it could be extracted at affordable cost from salts dissolved in seawater. The estimated reserves of lithium in seawater would be sufficient to satisfy the world's energy needs for many millions of years, and it is expected that in the future technologies for mastering the deuterium–deuterium reaction will become available, thus extending the availability of fusion fuel beyond the expected life of the solar system.
The main exhaust product of the reaction is helium, the very same gas we use to fill balloons for children. This element is harmless to people and the environment; it does not contribute to the greenhouse effect and, in fact, it does not even accumulate in the atmosphere: due to its low weight it escapes to space. In addition, the quantities produced would be very small: if all the energy in the world came from fusion, the amount of helium produced worldwide would be on the order of several thousand tons a year, to be compared with the roughly ten billion tons of CO2 released every year at present.
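The quoted helium figure can be checked with a back-of-the-envelope calculation; the sketch below assumes a world primary energy consumption of roughly 600 EJ per year, a round number introduced here for illustration only.

```python
# Rough check of the helium exhaust figure, assuming world primary energy
# consumption of ~600 EJ/year (an illustrative round number, not from the article)
# and that all of it is released as D-T fusion energy.
WORLD_ENERGY_J_PER_YEAR = 6.0e20           # ~600 EJ (assumption)
E_REACTION_J = 17.6e6 * 1.602e-19          # joules per D-T reaction
HE_GRAMS_PER_REACTION = 4.003 / 6.022e23   # one He-4 nucleus produced per reaction

reactions_per_year = WORLD_ENERGY_J_PER_YEAR / E_REACTION_J
helium_tons_per_year = reactions_per_year * HE_GRAMS_PER_REACTION / 1e6

print(f"~{helium_tons_per_year:,.0f} tons of helium per year")
# ~1,400 tons if the demand is met directly by fusion energy; accounting for the
# thermal-to-electric conversion losses of real plants raises this to a few
# thousand tons, in line with the figure quoted above.
```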
Safety is a major concern in every industrial facility, and in nuclear plants in particular. One of the advantages of a nuclear fusion plant would be its intrinsic safety: fusion plants will be safe not just because they will be carefully designed and operated, but because the physical properties of the process make an uncontrolled fusion reaction impossible. As we will discuss later, the very high temperatures required in the reactor, on the order of several hundred million degrees, are impossible to sustain if any malfunction arises: for example, an air leak into the reactor would immediately bring down the temperature and extinguish the fusion reaction. Another element to be taken into account is that, whereas in a fission reactor the amount of fuel inside the reactor could sustain the reaction for months (and therefore it might be more difficult to manage if control is lost), in a fusion plant the fuel contained inside the reactor would last for only a few seconds if the supply from outside were interrupted. As an example, the cooling system, whose failure was the cause of the problems at the Fukushima reactor, is not even an important safety component in the large ITER experiment because there is no significant residual heat once operation stops.
The main safety concern in a fusion plant would be the presence of tritium, which is radioactive and, even if it is not a long-term pollutant (its half-life is 12.3 years), is dangerous if inhaled or ingested as tritiated water. Fortunately, tritium is only an intermediate product and the fuel actually supplied to the plant is lithium; still, the storage of several kilograms of tritium is difficult to avoid and, as would happen with any other dangerous substance in an industrial facility, safety measures are required to prevent any release to the environment. The current designs would guarantee that, in the worst-case accident, there would be no need to evacuate people outside the facility fence.
The main drawback of fusion as a potential source of energy is the difficulty of generating and sustaining the reaction. In order to achieve the reaction, the two colliding nuclei must get close enough to allow the short-range nuclear forces to act; this can only be achieved if their energy is high enough to overcome the electrostatic repulsion of the two positively charged nuclei. Accelerating deuterium or tritium ions to these energies, 15–20 keV, is not particularly difficult; the difficulty arises when we try to get energy gain from the process: launching an ion beam against a target at the required energy would produce only a very small fraction of fusion reactions, because in most cases the long-range Coulomb repulsion will deflect the ions, which will miss the target, and many of them will also lose their energy in collisions with the electrons. The only way to achieve an efficient process is to confine the accelerated ions in a closed space, in such a way that, after having gained the required energy, they have many opportunities to collide before their energy is lost. The main problem is the availability of a suitable recipient: a gas in which the average particle energy is 15 keV has a temperature of 170 million degrees.
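That temperature figure is just the energy-to-temperature conversion T = E/k_B; the one-line sketch below makes it explicit.

```python
# Converting a mean particle energy of 15 keV into a temperature via T = E / k_B.
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant [eV/K]

energy_ev = 15e3
temperature_k = energy_ev / K_B_EV_PER_K
print(f"{temperature_k:.2e} K")   # ~1.7e8 K, i.e. about 170 million degrees
```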
Since the 1950s, scientists have been trying to find this kind of recipient and have developed two main families of experiments: inertial confinement, based on heating the fuel so fast that it undergoes the fusion reaction before it has time to expand, and magnetic confinement, based on the fact that at such high temperatures the gas, in the “plasma” state, is composed of charged particles, which can be confined by magnetic fields.
Inertial fusion uses an ion beam or a laser, the preferred option nowadays, as the means of delivering a large amount of energy in a very short time to the deuterium–tritium (DT) target. The target is illuminated with spherical symmetry, and this produces a pressure wave which converges toward its center. At a given moment, a fusion “spark” should be generated in the center, and the heat generated by these initial fusion reactions would propagate the fuel burn outward to the rest of the target. The most advanced inertial fusion experiment is currently the National Ignition Facility (NIF), located at the Lawrence Livermore National Laboratory (Livermore, CA). Recently, experiments have been reported in which this initial fusion spark has been achieved [2]. Its extension to the whole target has not yet been demonstrated, and the energy production is still a small fraction of that delivered by the lasers, but it is a promising result. A similar experiment, the “Laser Mégajoule”, is under construction in France, essentially for military purposes: inertial fusion experiments can be used to validate the models which are the basis for the computer simulation of thermonuclear explosions.
In parallel, the largest worldwide effort toward fusion energy has been, and is being, devoted to so-called “magnetic confinement”. The DT fuel at such high energies is in the “plasma” state, a gas where ions and electrons move separately, and can therefore be confined by a magnetic field, which essentially allows particles to move freely along the field lines but forces them to move in small circles when they try to move in the perpendicular direction. The next step is to construct a configuration where the field lines close on themselves, so that, ideally, particles would keep moving indefinitely along those closed trajectories. These configurations have been implemented since the 1960s in two main families of toroidally shaped devices, “stellarators” and “tokamaks”, which differ in the way they generate the complementary field required to avoid particle drifts in a toroidal geometry. A third approach, a linear configuration with “magnetic mirrors” at both ends, turned out to be much less effective and has been less developed.
The “tokamak” – a word derived from the Russian expression for “toroidal chamber with magnetic coils” – was first developed by I. Tamm and A. Sakharov (who later received the Nobel Peace Prize) in the early 1960s and was rapidly adopted by researchers around the world. Thirty years later (1991), the Joint European Torus (JET) – a tokamak experiment owned by the European Commission and located in Culham, near Oxford (UK) – carried out the first D-T experiment toward controlled fusion, providing a substantial amount of energy from the fusion reactions [3]. A few years later (1996–97), JET and the TFTR tokamak (Princeton, NJ) reached fusion power levels on the order of 10–15 MW, with a ratio of fusion power to heating power of 60% [4, 5].
Despite the criticism that fusion researchers are “always 40 years away” from the goal of fusion power, the reality is that the efficiency of tokamak experiments, measured as the “triple product” of ion temperature, ion density, and energy confinement time (Ti·ni·τE), grew at a pace comparable to that of microprocessors between 1960 and 2000, and this progress will hopefully resume when the large ITER experiment starts (see Fig. 1). However, it is necessary to realize the magnitude of the challenge: the magnetic field confines a single particle very well but, as the many particles collide, there is diffusion across field lines and both particles and energy slowly flow away. In order to minimize these losses we have two essential tools. One is to increase the magnetic field, but this is limited by the superconducting coils which generate it, so it is difficult to envisage a device with an average field above 6–7 Tesla; the second tool is to increase the machine size. “Wind tunnel” comparisons of tokamaks of similar geometry and increasing size have shown that, in order to achieve “ignition” conditions, a situation where the energy generated in the fusion reactor can compensate the losses and maintain the high temperatures which sustain the reaction, we need a hot plasma volume on the order of 1000 m³ for a standard magnetic field value of 5–6 T. This means very large, complex, and expensive devices, with development times in the range of 10–20 years.
Figure 1. Progress of the fusion triple product Ti·ni·τE.
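For orientation, the sketch below evaluates the triple product for round, ITER-like plasma parameters (illustrative values, not official projections) and compares it with the few-times-10^21 keV·s/m³ usually quoted as the ignition threshold for deuterium–tritium.

```python
# Illustrative evaluation of the triple product n_i * T_i * tau_E for round,
# ITER-like numbers (these values are indicative only, not official projections).
n_i = 1.0e20    # ion density [m^-3]
T_i = 15.0      # ion temperature [keV]
tau_E = 3.5     # energy confinement time [s]

triple_product = n_i * T_i * tau_E
print(f"{triple_product:.1e} keV s m^-3")
# ~5e21 keV s m^-3, around the few-times-1e21 keV s m^-3 usually quoted
# as the threshold for D-T ignition.
```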
After the success of JET and TFTR, the next step is ITER, a joint experiment of seven parties which together represent more than half of the world's population: China, India, Japan, Korea, Russia, the United States, and Europe, which acts as a single party and is represented by the European Commission. ITER (from the Latin word "iter", the way), with nearly 1000 m³ of hot plasma, aims at demonstrating the scientific feasibility of fusion as an energy source. The specific objective is to obtain an energy gain Q = 10, which means that ITER will generate ~500 MW of fusion power with 50 MW of external power injected to heat the plasma. The gain Q = 10 would be sustained during 400-second periods; as a second objective, a less demanding value of Q = 5 would be sustained for periods of 1500 seconds (see details in Table 1). In addition, ITER will carry out a number of experiments to test the technology developments necessary for a power plant, in particular the “breeding blanket” modules, which will test the technology for tritium generation from lithium (Fig. 2).
Table 1. Main ITER parameters.

Fusion power | 500 MW
Fusion power gain (Q) | >10 (for 400 s inductively driven burn); >5 (for 1500 s)
Plasma major radius (R) | 6.2 m
Plasma minor radius (a) | 2.0 m
Plasma vertical elongation | 1.70/1.85
Plasma current (Ip) | 15 MA
Toroidal field at 6.2 m radius | 5.3 T
Installed auxiliary heating | 73 MW
Plasma volume | 830 m³
Plasma surface area | 680 m²
Plasma cross-section area | 22 m²
Figure 2. Artist's view of the ITER device.
ITER is a large extrapolation in volume (10 times larger than JET) and also in technology. In addition to the use of superconducting coils, cooled to cryogenic temperatures of a few kelvin while located less than a meter away from the hundred-million-degree plasma, the largest challenges come from the goal of operating long pulses at full fusion power: all the internal elements need active cooling as well as neutron-resistant functional materials, particularly insulators.
The challenge in science and technology is formidable, but it is not the only one: ITER is also, to some extent, a social experiment. Magnetic fusion research having been declassified by both the Eastern and Western blocs during the Cold War, the undertaking of a large joint experiment was one of the agreements between Presidents Reagan and Gorbachev at their summit of November 1985 in Geneva. The European Union and Japan joined the project immediately, and the design evolved slowly during the following years, in a process which included the temporary withdrawal of the USA and the decision in 1998 to redesign the device in order to make it more affordable, and which ended with the delivery of the final design report in 2001. In 2004, the USA returned to the project, and China and Korea, as well as later India, expressed their interest in joining, which was welcome in order to share the multibillion costs of the project.
The main driver of the parties' interest was fusion energy as a long-term goal but, in the shorter term, their interest was also focused on the important technology developments around ITER. This led to an organization based on a moderate-size central team, located at the project site in Cadarache (France) and in charge of about 15% of the total ITER budget, and seven smaller, but still strong, teams, the “domestic agencies”, located at the different parties' headquarters and in charge of delivering in-kind components to the central team for about 85% of the budget. Europe, as the host party, would provide nearly half of the total budget, and the remaining part would be covered by the other six parties.
The organization based on in-kind contributions allowed for an a priori distribution of the participation and could accommodate the wish of the parties to participate in the technologies of their interest; in addition, it allowed the emerging countries to have lower costs by developing components with their own workforce. On the other hand, the system, based on a central team which prescribes the design but has no responsibility for the construction cost of the components, and domestic agencies which have to procure and pay for those components, is prone to internal disputes and delays in decision-making.
The ITER agreement was signed in November 2006, 21 years after the idea was launched, and the first estimate of the construction period was 9 years. It soon became evident that the distance between the report delivered in 2001 and the necessary constructive design was much greater than originally estimated. The report had concentrated on the main machine parameters, the related physics, and the design of the critical high-technology components, but ITER is a very complex industrial plant, subject to a nuclear license (not as a nuclear power plant but as a nuclear facility) and with a very demanding process of integration into the buildings' design. The consequence of having concentrated on the critical components, necessary to guarantee the feasibility of the project, while overlooking the more conventional parts of the facility was an underestimation of both the cost, which essentially doubled after an in-depth revision, and the construction time.
Although the cost has been kept within reasonable bounds after the 2010 revision, suffering moderate increases but remaining within the limits of the originally foreseen contingency, the schedule seems difficult to control. Subsequent revisions of the baseline schedule have led to an estimate for the “first plasma”, which would mark the end of the construction period, of 2022–23. This delay is the accumulation of several causes: the lack of a finalized design, the lack of manpower at the central team imposed by budget restrictions, delays in Japanese components after the 2011 earthquake damaged some key facilities, additional licensing requirements derived from the post-Fukushima revision of all nuclear procedures, etc., but a significant part of the delay comes from the extremely complex organization of the project and the distribution of roles and responsibilities. A typical example arises when a component design produced by the central team is perceived by the domestic agency in charge of its construction as an over-specification which raises cost: the domestic engineers come back with redesigns aiming to lower the cost, while the central team is mainly concerned with confidence in the functional role of the component, and the two enter a loop with no clear outcome. Many of these organizational problems have been highlighted by the management assessment report recently commissioned by the ITER Council.
In the meantime, the good news is that most of the high-tech components, like the superconducting coils or the vacuum vessel, have undergone their final designs, with the corresponding design reviews, and the related construction contracts have been awarded to industry, which is so far progressing without known major difficulties. The major technical problem arose with the central solenoid superconducting cable, which in 2012 showed degradation with operating time in the samples tested. Fortunately, further R&D by the Japanese team in charge of this cable provided, in time for the coil construction in the USA, a new design which was successfully tested without showing any degradation.
Other elements of confidence have been provided from the physics side by the research carried out in supporting experiments around the world. As an example, one of the elements of concern with the original design of ITER, which used a carbon inner wall, was the problem of tritium accumulation in the form of hydrocarbons deposited in remote parts of the device, which could lead to the requirement to stop operation after every few experiments and undertake a complex tritium removal procedure. On the other hand, the use of carbon, due to its good behavior at high temperature and its low atomic number, was essential for efficient operation from the physics point of view, and no clear alternative was in sight. Fortunately, tests of plasma operation with a full tungsten wall carried out in recent years in the German device ASDEX-U, and with a tungsten divertor and beryllium wall (the same combination as in ITER) in JET, have demonstrated reliable and efficient operation without the tritium retention problem. ITER has now changed its design, and the lower part of the inner wall, the so-called “divertor”, where the interaction with the plasma concentrates, will use tungsten as the plasma-facing material.
In parallel, developments in the control of the periodic bursts of power to the wall (the so-called Edge Localised Modes, ELMs), the progress in the understanding of energy and particle confinement and its extrapolation to ITER size, and the achievement of reliable operation at the high plasma densities projected for ITER reinforce our confidence in the operational success of the experiment.
Still, some of the original concerns remain, for instance the need to avoid and mitigate the so-called “disruptions”, rapid losses of confinement which could damage internal components, but progress is steady on all those fronts.
This situation, with organizational delays on one side but smooth progress in the most critical components and physics projections on the other, makes us relatively optimistic about the actual success of the project and encourages us to work to find the right organizational framework to avoid further delays.
With ITER starting operation in 2023, the critical high-gain results with Q = 10 should come shortly before 2030. One of the answers we expect to get from these experiments is the efficiency of the plasma heating by the high-energy He ions generated in the fusion reaction, also called “alpha particles”. This is crucial for the future of the tokamak as a fusion reactor because we need these alpha particles to maintain the high temperatures which sustain the reaction. In the fusion reaction, neutrons carry 80% of the released energy; they cannot be retained by the magnetic field and therefore cannot contribute to sustaining the reaction (in the power plant their energy will be extracted by the coolant and used to drive the turbines). The alphas carry only 20% of the fusion power, but they are charged particles which can be retained in the plasma by the magnetic field and contribute to sustaining the plasma temperature. The problem is that, whereas the plasma particles have an average energy of 15–30 keV, the alphas are born with an energy of 3.5 MeV, more than a hundred times higher, and they would escape quickly unless the energy transfer mechanism by means of collisions is efficient enough. Preliminary experiments in JET [6], as well as theoretical predictions, show that the alphas will very likely heat the plasma efficiently, but the ultimate test will be performed in ITER. With Q = 10, the power generated by fusion will be ten times the externally injected heating power; the alphas, which carry 20% of this power, will therefore provide twice the externally injected heating, a clear effect that will serve as a conclusive test of alpha heating.
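The arithmetic behind that statement is simple bookkeeping on the ITER design values quoted above, as the sketch below makes explicit.

```python
# Power bookkeeping for the ITER Q = 10 scenario described above.
Q = 10               # fusion gain: P_fusion / P_external
P_external = 50.0    # MW of externally injected heating

P_fusion = Q * P_external      # 500 MW of fusion power
P_alpha = 0.2 * P_fusion       # alphas carry 1/5 of the fusion power
P_neutron = 0.8 * P_fusion     # neutrons carry the remaining 4/5

print(f"Alpha heating: {P_alpha:.0f} MW, "
      f"i.e. {P_alpha / P_external:.0f}x the external heating power")
```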
ITER will demonstrate the scientific feasibility of fusion as an energy source and will also test key technologies for the reactor, but ITER will not yet be a real power plant. The main differences between ITER and a demonstration power plant, the so-called “DEMO” (from “demonstration”), would be: tritium self-sufficiency, full-plant energy efficiency, use of low-activation neutron-resistant materials, and reliable continuous operation. In the following pages, we address the status and perspectives of the related developments.
As explained in the introduction, fusion plants would need to generate in situ the tritium they consume by bombarding lithium with the fusion-generated neutrons, through the reaction n + Li6 → T + He4. This function will be performed by the so-called “breeding blanket”, which will surround the plasma. The breeding blanket also has two main additional functions: to extract the power of the neutrons, conveying it to the steam generators and the turbines, and to shield the sensitive components, in particular the superconducting coils, from the neutron flux, which would heat and damage them. This makes the breeding blanket a very demanding nuclear component.
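To see why in-situ breeding is unavoidable, the sketch below estimates the tritium burn rate of a plant with about 2000 MW of fusion power (the DEMO-class figure discussed later in the text); the calculation is illustrative and assumes one tritium nucleus is consumed per reaction.

```python
# Rough estimate of the tritium burned by a plant with ~2000 MW of fusion power
# (a DEMO-class figure, used here for illustration only).
P_FUSION_W = 2.0e9                       # fusion power [W]
E_REACTION_J = 17.6e6 * 1.602e-19        # joules per D-T reaction
T_GRAMS_PER_REACTION = 3.016 / 6.022e23  # one tritium nucleus burned per reaction

reactions_per_second = P_FUSION_W / E_REACTION_J
tritium_grams_per_day = reactions_per_second * T_GRAMS_PER_REACTION * 86400

print(f"~{tritium_grams_per_day:.0f} g of tritium burned per day")
# A few hundred grams per day, i.e. on the order of 100 kg per year,
# far more than external (fission) sources could supply.
```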
ITER will not be equipped with a full breeding blanket and therefore will not be self-sufficient in tritium. The current plan is to purchase the tritium it will consume from an external source, essentially the Canadian nuclear fission programme, but it will have a number of smaller blanket modules which will be used to test different tritium-generation technologies.
The ITER blanket modules will test different options for the three main elements of a breeding blanket. First, there is the choice of breeding material, the main options being molten eutectic lithium–lead, with 90% Li6 enrichment, or lithium ceramic pebbles (Li4SiO4 or Li2TiO3) with 30–60% Li6 enrichment. Second, we need a neutron multiplier, usually beryllium or the lithium–lead itself, because a fraction of the neutrons generated in the fusion reaction will fail to hit the breeding material. Fortunately, the neutron energy required for the breeding reaction is relatively low and, using a neutron multiplier, each single 14 MeV fusion neutron can generate several secondary neutrons able to produce tritium. The third element is the coolant, which must extract the energy deposited and generated in the blanket (the breeding reaction is exothermic); here the options are water cooling, helium cooling, or dual cooling by helium and lithium–lead [7].
The integral test of breeding blanket modules in ITER will be a crucial experiment to validate the different technologies. The strategic value of these designs is such that the breeding blanket programme is not part of the ITER agreement, which foresees that all the knowledge generated in the project will be shared among the seven parties, but a separate activity whose results will be private intellectual property. It has to be coordinated because of ITER's host role, but the information obtained in the experiments will be the sole property of the corresponding party and, in principle, will not be shared.
The European Roadmap toward fusion electricity [8] includes a breeding blanket technology programme parallel to the preparation of the validation tests of the ITER blanket modules. The four technologies selected are the two which Europe will test in ITER, lithium–lead and ceramic pebbles, both cooled by helium, plus two additional options. The water-cooled lithium–lead, a shorter-term option, has the advantages of avoiding the use of helium (which might become scarce if thousands of fusion plants need to use it) and of the high cooling capacity of water; on the other hand, water generates corrosion problems as well as safety issues, and its operating temperature window (280–325°C) is hardly compatible with the low-activation neutron-resistant materials we have at hand today. The dual coolant, a longer-term option, uses a faster circulation of the liquid LiPb so that it also serves as a high-temperature coolant. Insulating inserts and an additional helium cooling system allow the structural material to remain at a lower temperature than the main coolant, whose much higher temperature leads to a higher plant efficiency.
The Q = 10 power gain of the ITER plasma will not be enough to achieve a real energy gain in the full balance of the plant, which must take into account the energy consumption of the coils, cryogenic systems, and other auxiliary systems, as well as the wall-plug efficiency of the plasma heating systems and the efficiency of the thermal cycle. Overall, an efficient power station would require Q on the order of 50, which means a device with either more efficient physics, a higher magnetic field (difficult to achieve due to the limitations of the superconductors), or a larger size.
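A back-of-the-envelope plant power balance illustrates why a plasma gain around Q = 50 is needed; the efficiencies and the auxiliary load in the sketch below are assumed round numbers chosen for illustration, not design values.

```python
# Back-of-the-envelope plant power balance; all efficiencies and the auxiliary
# load are assumed round numbers, not design values.
def net_electric_power(Q, P_heat=50.0, eta_thermal=0.35, eta_heating=0.4, P_aux=100.0):
    """Net electric output [MW] for plasma gain Q and injected heating P_heat [MW]."""
    P_fusion = Q * P_heat                    # fusion power
    P_thermal = P_fusion + P_heat            # heat to the coolant (blanket energy gain ignored)
    P_gross = eta_thermal * P_thermal        # gross electric output
    P_recirc = P_heat / eta_heating + P_aux  # heating wall-plug power + coils, cryoplant, pumps
    return P_gross - P_recirc

for Q in (10, 30, 50):
    print(f"Q = {Q:2d}: net electricity ~ {net_electric_power(Q):5.0f} MW")
# Q = 10 barely breaks even, while Q ~ 50 leaves several hundred MW for the grid.
```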
Typical European designs of a demonstration fusion power plant [9] consider a total fusion power in the range of 2000 MW thermal and a linear size about 1.5 times that of ITER. A device of this size and power is a significant challenge, in particular as concerns the extraction of the power.
As explained before, the neutrons carry 80% of the power; they escape the plasma isotropically, cross the wall of the tokamak, and are absorbed volumetrically in the coolant and blanket structures. The thermal power is very high, but this broadly distributed load can be tolerated by the materials. The neutrons can generate a number of other issues in the materials, but their thermal load is not a serious problem.
On the other hand, the remaining 20% of the power, carried by the alpha particles, together with externally injected power, also carried by charged particles, flows slowly toward the wall. The magnetic field can delay this flow but once the steady state is reached there is a continuous flux of energy toward the inner wall of the device.
All this power is conveyed to a small fraction of the wall, the so-called “divertor”. This is necessary to avoid the penetration of sputtered wall particles into the hot plasma, which would quench the high temperature, but it creates a serious problem: all this power is deposited in a narrow ribbon, several centimeters wide, along the torus, leading to thermal loads in excess of 20 MW/m², about twice the current engineering limit.
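A crude geometric estimate shows how such heat fluxes arise: dividing the power that reaches the targets by the area of the wetted ribbon. The numbers in the sketch below (target power, strike-point radius, wetted width) are purely illustrative assumptions, and field-line tilt and flux expansion are ignored.

```python
# Crude estimate of the divertor heat flux: power divided by the wetted ribbon area.
# All numbers are illustrative assumptions for a DEMO-class device.
import math

P_target_W = 150e6     # power reaching the divertor targets after edge radiation (assumed)
R_strike_m = 7.0       # major radius of the strike points (assumed)
wetted_width_m = 0.05  # effective wetted width of the ribbon (assumed)
n_targets = 2          # inner and outer divertor legs

area_m2 = n_targets * 2 * math.pi * R_strike_m * wetted_width_m
heat_flux_MW_m2 = P_target_W / area_m2 / 1e6
print(f"~{heat_flux_MW_m2:.0f} MW/m^2")   # tens of MW/m^2 without further mitigation
```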
The possible solutions to what experts nowadays see as the main challenge for the success of fusion energy operate from both sides of the problem: cooling the plasma edge by emission of radiation in the visible and ultraviolet range, which distributes the load over a larger surface, and designing “divertor” geometries and materials which can handle the power.
Cooling of the plasma edge can be achieved by injecting gases like nitrogen, krypton, or argon [10]; the goal is that a large fraction of the power is radiated at the edge while preserving good core confinement. In addition, geometries which expand the interaction area in order to decrease the power density are being developed, like the “super X” [11] or the “snowflake” [12] divertors. The final tool to overcome this challenge is the choice of materials: the basic reference material is tungsten, but liquid-metal alternatives using lithium, gallium, or tin, which offer “self-repairing” walls, are also being considered.
The power carried by the neutrons is not a big problem, as explained above; however, the high fluence of energetic 14 MeV neutrons generates a different set of problems in the materials. These problems will not be present in ITER, at least for the structural materials, due to the relatively low accumulated neutron fluence, but they will be very severe for DEMO and for commercial fusion plants.
Firstly, each neutron impact gives rise to a cascade of collisions which displaces many atoms from their positions. This is measured in “displacements per atom”, or dpa: one dpa means that, on average, every atom within the material has been displaced once. The structural material of the blanket and first wall in a fusion reactor will suffer in excess of 100 dpa during the component lifetime. In addition, the 14 MeV neutrons, unlike the neutrons in a standard fission reactor, will generate transmutation reactions in the material which produce helium and hydrogen, creating blisters as well as material swelling. All these phenomena can significantly degrade the mechanical properties of the material, but there is one more adverse effect: the irradiated material becomes radioactive and will have to be treated as radioactive waste.
The materials which best withstand the 14 MeV neutron bombardment are vanadium alloys, titanium alloys, silicon carbide (a long-term promise but still difficult to use as a structural material) and, as the current reference material with the highest technological maturity, the RAFM (Reduced Activation Ferritic-Martensitic) steels. As iron is relatively resilient to neutron bombardment and suffers little activation, RAFM steels, like the Japanese F82H or the European EUROFER, are based on the suppression of problematic impurities (Ni, Cu, Al, Si, Co, etc.) and the substitution of problematic alloying components (Mo, Nb) by other elements which play the same chemical role in the alloy but have a more benign nuclear behavior (Ta, W). RAFM steels would suffer less activation than standard steels, although they would still be an activated material after decommissioning from the fusion reactor. Current studies foresee that the components could be recycled after ~100 years under custody as medium-low-level radioactive waste, as opposed to ~100,000 years for standard steel components under equivalent conditions. The possibility of further reducing this period depends on the level of impurity suppression that is technically, and economically, achievable. A fast activation decay is also observed for vanadium alloys [13], but vanadium currently lacks industrial development and has some negative features, such as corrosion, tritium permeation, and a narrower operating temperature window.
One of the problems in the development of materials for fusion reactors is the absence of intense sources of 14 MeV neutrons which would allow us to test the behavior of the materials under conditions similar to those in a fusion plant [14]. EUROFER has shown good performance under irradiation in fission reactors, which essentially reproduce the dpa effect, but there is little knowledge about the effect of He and H accumulation.
One possibility is to use theoretical modeling of the irradiation effects. Activation is relatively easy to determine, as it essentially depends on the concentration of the different elements, and neutron propagation calculations are possible. However, the structural changes are nearly impossible to compute from first principles: we face a problem where the number of particles is on the order of Avogadro's number and the changes must be tracked on picosecond scales over periods of many seconds (the characteristic times of the changes in the mechanical properties). The modeling is performed using a multiscale approach, but the approximations are such that experimental tests of each scale model, as well as an overall test of the complete modeling chain, are necessary.
One family of 14 MeV neutron sources under consideration is based on the use of reduced-size fusion reactors with modest Q but a substantial DT reaction rate, sustained by externally injected power and equipped with a full breeding blanket in order to self-generate the tritium. The so-called CTFs (Component Test Facilities) belong to this family, and there are several proposals under study in both China and the USA [15-17].
The second family of sources is based on accelerator-driven neutron generation. For example, the reference proposal, IFMIF (International Fusion Materials Irradiation Facility), considers two 40 MeV deuteron beams of 125 mA each which hit a liquid lithium target, producing a neutron spectrum very similar to that of a fusion reactor. IFMIF, a 1500 M€ project, would produce 20–50 dpa/year in a reduced volume of 0.5 L, with smaller rates in the wider adjacent space. It is considered the ultimate tool to qualify materials for fusion power plants. Currently, Europe and Japan are carrying out validation developments for IFMIF components, and a complete accelerator with all the basic elements will be tested in Rokkasho (Japan) in 2017. The possibility of using this prototype accelerator in an early, reduced version of IFMIF is currently gaining momentum. Such a source could be available by the early 2020s in order to qualify components at 20 dpa for an earlier phase of DEMO.
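For scale, the beam power implied by those accelerator parameters follows directly from P = V·I, as the short sketch below shows.

```python
# Beam power implied by the IFMIF accelerator parameters quoted above (P = V * I).
beam_energy_V = 40e6    # 40 MeV deuterons -> 40 MV effective accelerating voltage
beam_current_A = 0.125  # 125 mA per beam
n_beams = 2

power_per_beam_MW = beam_energy_V * beam_current_A / 1e6
print(f"{power_per_beam_MW:.0f} MW per beam, "
      f"{n_beams * power_per_beam_MW:.0f} MW total on the lithium target")
```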
In the meantime, the fusion materials programme is strongly involved in the development of new materials and the consolidation of the reference ones. For example, one of the current limitations of EUROFER-type steels is their reduced operating temperature window (350–550°C), which might be expanded by using ODS (Oxide Dispersion Strengthened) versions with yttrium oxide. Limited irradiation tests are also carried out in fission reactors (the use of boron-doped material or the inclusion of some amount of 56Fe can simulate the He generation by 14 MeV neutrons) or using multiple ion beams to produce simultaneous dpa and He/H implantation, at a very fast rate but in very small sample volumes. These experiments can complement the theoretical models as well.
RAMI (Reliability, Availability, Maintainability, and Inspectability) will be a key issue in a facility as complex as a fusion power plant if we want it to operate under economically sustainable conditions. In particular, given that the structure will become activated soon after the start of operation, most maintenance operations will have to be done by remotely controlled manipulation. This means that all components inside the vacuum vessel, and many of the components inside the cryostat, even for ITER, will have to be designed to be compatible with Remote Handling (RH) operations: size and weight of the components, assembly method, assembly sequence, interfaces with the RH tools, etc. Today, devices like JET have shown the feasibility of complex RH operations, like the full substitution of the divertor or the first wall (see Fig. 3); however, the replacement times need to be significantly shortened in a commercial reactor, and this implies evolving from today's man-driven operation to automatic operation for many of the actions.
Figure 3. JET remote handling system.
A lot of technology development will also be required in the plant systems: tritium extraction, isotope separation systems, helium and liquid-metal heat exchangers, and advanced thermal cycles are among the systems currently being developed as part of the fusion technology programmes worldwide.
The “tokamak” concept, on which ITER and most fusion devices are based, is a very clever design with optimal confinement properties. In this configuration, the confined plasma itself contributes to building the confining magnetic configuration; this is achieved by inducing a strong electric current in the highly conductive hot plasma. With this contribution from the plasma, some complex additional magnets that would otherwise be necessary are spared. The current also contributes to heating the plasma by the Joule effect. This solution has advantages and disadvantages compared with the other family of devices, the “stellarator”, which assumes no help from the plasma and builds the complete magnetic field by means of additional three-dimensionally shaped magnets.
The advantages of the tokamak, comparative simplicity and very good confinement properties, make this configuration the best option for a fusion ignition prototype like ITER, or even for a first DEMO device; however, the tokamak also has some limitations derived from the strong coupling between the plasma and its confinement. First of all, the plasma current (up to 15 MA in ITER) is usually induced through a transformer effect, which cannot be sustained in steady state. Today's tokamaks are pulsed devices, and this might have implications for the management of the supply to the electric network and for component fatigue when used as a power plant. Some progress has been achieved in the development of noninductive current drive systems, but there is still a long way to go toward complete steady state. The second problem, derived from the plasma–confinement coupling, is the existence of scenarios where plasma and confinement drop suddenly together in a very fast positive-feedback process (milliseconds) which ends in a tremendous thermal release to the wall and the quench of the >10 MA plasma current. In these events, called “disruptions”, very strong electromagnetic forces are generated, and jets of fast electrons can reach multi-MeV energies, becoming a potential threat to the integrity of the internal components.
On the other hand, stellarators are inherently steady-state devices which could operate under stationary conditions for months and stop only for maintenance. As the confinement is decoupled from the plasma, stellarators are also free of disruptions.
Stellarators are, in fact, older than tokamaks: the first devices were developed by Spitzer in the early 1950s, but the simplicity and good results of the tokamak soon relegated them to a secondary role. In the 1980s, new design tools and construction techniques, together with the introduction of radiofrequency plasma heating systems which could replace the traditional Joule heating based on the plasma current, allowed a relaunch of the stellarator, and the results from devices like the German W7AS and the Japanese LHD, a superconductor-based device, have shown the strong potential of this configuration, overcoming the main confinement limitations that hindered progress with earlier devices. In 2015, the large superconducting stellarator W7X (Fig. 4), currently under construction in Greifswald (Germany), will start operation. The results of this experiment might strongly reinforce the potential of the stellarator as a long-term option for the commercial reactor units, in which engineering complexity will play a secondary role compared with the simplicity and smoothness of the operation.
Figure 4. Assembly process of the W7X stellarator.
The 14 MeV DT fusion neutrons can be used to irradiate uranium-238 or thorium-232 and generate fissile material, which could be used either in a pure fission reactor (in this case the fusion system would be a way to produce fission fuel) or in the fusion reactor blanket itself, playing the role of an energy amplifier. The same DT neutrons could also be used simply to irradiate and “burn” the nuclear radioactive waste accumulated during the complete history of fission energy generation.
These three applications have intermittently gained and lost attention since the idea was conceived in the 1950s and later relaunched by H. Bethe in 1979 [18]. In principle, a fusion gain Q = 5, complemented by a tenfold amplification from the fission blanket, would suffice for an efficient power plant, which means that, on the fusion side, a device like ITER, or even a bit smaller, could do the job. Supporters of the idea see as its main advantages the simplification of the fusion core and a faster path toward energy generation. For the opponents, hybrids simply bring together all the problems of fusion (in particular, complexity) and of fission (waste that is reduced but still significant, proliferation, handling of highly active material). A very interesting discussion, which includes the opinion of a “skeptics group”, can be found in ref. [19]. Currently, there is no effort on hybrids in the European Roadmap, which focuses on “pure fusion”, but there are active groups in China and the USA, and significant activity and interest have been reported by the Russian programme [20].
The parallel efforts of ITER and the technology programmes should converge in the construction of a DEMO power reactor. The concept of DEMO varies among the different world programmes, and it is not even clear whether DEMO will be a single worldwide collaborative experiment like ITER or several competing developments running in parallel in different countries, each seeking a leading position in a phase when the economic profit of fusion might be in sight.
The European DEMO concept [21] sees the device as the last experimental facility before industry takes the lead in the construction of commercial fusion plants. As described above, it should be self-sufficient in tritium, use advanced low-activation materials, and provide on the order of 500 MW of net electricity to the grid during operational periods of several weeks.
With the ITER high-Q experiments foreseen for the late 2020s and the results of the 20 dpa materials irradiation available by the same dates, DEMO construction could start by the mid-2030s and should be able to start net electricity generation before 2050 (Fig. 5). By that time, we expect that the materials irradiation facility IFMIF will have been built and will have provided the necessary data for full qualification of low-activation structural materials above 100 dpa. Those data might also be complemented with results from the current projects for a CTF. From this point, we will enter the situation where private investors and industry will engage in the construction of the first commercial plants. When this will happen is difficult to predict; it will depend on the energy market situation and the overall energy supply scenario, but given the size and potential profits of the energy market (the full cost of ITER construction, estimated at 12–15 billion euro, is about the cost of a single day of worldwide energy consumption), we expect that this might be a relatively fast development, leading to a significant share of fusion in the energy mix during the second half of the century.
Figure 5. European fusion roadmap [8].
None declared.
Published on 01/06/17
Submitted on 01/06/17
Licence: Other