A view from Paul Giroux’s presentation on “Building Hoover Dam: The Men, The Machines, and The Methods”

Paul Giroux on civil engineering and construction.

Mr. Giroux, of Kiewit, presented the history of the Hoover Dam and the civil engineering marvel it represents. The presentation centered on the excavation and concrete work done by Six Companies on the Boulder Canyon Project from 1931 to 1936. Giroux gave an interactive presentation filled with historical pictures of the men at work, the machines involved, and the engineering plans that brought the project together. He introduced the engineering and construction leaders of the day, laid out their challenges, and walked through the construction approach to the concrete work given the technology of the time.

Mr. Giroux began his presentation with a brief history of U.S. politics in the 1930s. He then introduced the Boulder Canyon Project Act of 1928, which authorized the construction of the Hoover Dam. The massive civil engineering feat was to be constructed in Black Canyon on the Colorado River, between Arizona and Nevada. The structure required 3,250,000 cubic yards of concrete and is listed among the civil engineering wonders of the modern world.

According to Mr. Giroux, the engineering plans were drawn and reviewed by the Bureau of Reclamation.  The three major portions and challenges of the project were the diversion of the Colorado River, the construction of the concrete arch-gravity dam, and the hydropower plant. To provide further context, Mr. Giroux gave brief biographies of the men in charge of the engineering and construction.  He introduced Raymond F. Walter, the Chief Engineer of the Bureau of Reclamation, as the key engineer in the design process and in the analysis of construction bids. He also mentioned Frank T. Crowe, the project superintendent, chosen for his work on the nation’s largest dams of that era, and John Savage, the engineer in charge of project supervision and onsite design.

The construction bid went to Six Companies for $48 million, a figure said to have come within $25,000 of the Bureau’s estimate.  To support construction, Boulder City was built to house and maintain the community of working men.  Once the crews were established, the construction team had 28 months to divert the river by excavating four tunnels.  Each tunnel had a diameter of 56 feet, and together they ran approximately three miles.  The construction method was to drill holes for the explosives, detonate them, and then haul off the soil and rock.  This method required the use of 25 machines and 1,500 men. Once a tunnel was completed, the concrete work began.  To provide the necessary material, aggregate-processing and concrete batch plants were built on low and high ground: the low-ground plants served the tunnels and base structure, the high-ground plants the rising structure.  Several rail lines were also constructed to bring in additives from Las Vegas and the surrounding towns.  The concrete pour for each tunnel was divided into 40-foot sections composed of the tunnel floor, sidewalls, and top arch.  To withstand the volume of water expected during the flood season, the tunnel linings were designed to be three feet thick, which reduced the tunnel opening to 50 feet.

Once the tunnels were completed, river diversion and construction of the upstream and downstream cofferdams began.  To meet the project milestones, work had to continue 24 hours a day in three consecutive shifts.  Each cofferdam was twice the size of the Hoover Dam’s base structure, a conservative measure meant to prevent flooding of the construction site and injury to the men.  Once the water had been pumped out, the formwork for the base structure was started.  The concrete arch-gravity dam was to be built in sections.  To tackle the problem of concrete shrinkage, a method for removing the heat produced by the curing concrete had to be developed.  Engineer John Savage proposed the winning design, in which one-inch pipe was woven through the sections and the concrete poured around it.  To lower the thermal stress and expedite concrete shrinkage, river water and then ice water was pumped through each section.  As the shrinkage settled, grout was pumped in under pressure and the pipes filled.  Two 20-ton cableway cranes were used for the placement of concrete in the dam itself. Mr. Giroux said the civil engineering marvel of the Hoover Dam does not stop at the massive tunnel excavation and concrete pour: the construction of the hydropower plant took longer than the construction of the dam itself. The powerhouse holds 17 turbines, all of which had to be assembled on site.

In concluding his presentation, Mr. Giroux gave a brief history of the labor laws of the time and commemorated the lives lost in the pursuit of restraining the Colorado River.  He then answered questions regarding the curing process of the dam, which he said is still ongoing, and compared earthen dams, concrete arch-gravity dams, and buttress dams.

For further reading consider the following:

  • Dolen, T.P., P.E., “Advances in Mass Concrete Technology - The Hoover Dam Studies”, Concrete Technology, ASCE, Pages 58-73, 2010.
  • Rogers, J.D., P.E., “Hoover Dam: Evolution of the Dam’s Design”, Structural Engineering and Hydraulics, ASCE, Pages 85-123, 2010.
  • Bartojay, K., P.E., Joy, W., “Long-Term Properties of Hoover Dam Mass Concrete”, Concrete Technology, ASCE, Pages 74-84, 2010.

A view from Dr. Alan D. Wright’s presentation on “Wind Turbine Advanced Controls”

Alan Wright earned his B.S. and M.S. from Oregon State University and his Ph.D. from the University of Colorado.

Dr. Wright on control system applications.

Modern Control Theory guest speaker Dr. Alan D. Wright of the National Renewable Energy Laboratory introduced his work as the study of control systems applied to wind turbines. He stated that the goal he and his team are pursuing is to modify the operating state of a turbine.  The main research focuses on three-bladed wind turbines, though some two-bladed turbines are also monitored.

The control system consists of a number of sensors, actuators, and a hardware/software system that processes the input signals from the sensors to generate output signals for the actuators.  The control actions are nacelle yaw, generator torque, and blade pitch.  Dr. Wright stated that with a fixed-speed induction generator, the rotor-speed variations are small.  With frequency converters and/or power electronics, depending on the turbine type, the generator can maintain a constant torque or drive the torque to any desired value; the trade-off is that the rotor speed can vary significantly. Dr. Wright introduced pitch control as the focus of his research section.  He said pitch control regulates the aerodynamic power or torque of the system and is a fast control that can interact with the dynamics of the system.  The pitch is used to feather or stall the blades.

There are two operating regions of control, one at low wind speeds and one at high wind speeds.  The control goal is to maximize energy capture at low winds and limit the power at high speeds. To maximize energy capture, the controller is baselined with a constant pitch; the tower damping controller then adds perturbations in generator torque and blade pitch to that baseline. Where power needs to be limited, independent blade-pitch control allows for speed regulation, tower fore-aft damping, and wind-shear load mitigation, while generator torque allows for tower side-to-side damping and drive-train torsion damping.  There are therefore two separate control loops in the system.

Dr. Wright went on to show a general block diagram with wind disturbances acting on the linearized turbine, along with gain-variation curves that provide control design points for different pitch-torque scenarios.  To further demonstrate the impact the controller gains have on the system, Dr. Wright presented a graph of the effects of proportional and integral control on the rotor speed: though the settling times proved nearly the same, the rise times varied. His team is therefore using full-state feedback control to regulate the rotor speed in the presence of wind-speed disturbances and to stabilize the turbine modes.  Full-state feedback allows for the stabilization of the flexible modes, while state estimation provides the controller with non-measurable states and accounts for uniform and linear-shear wind disturbances.
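As a rough illustration of the proportional-integral idea above (a toy sketch in Python, not Dr. Wright's controller; the one-state rotor model, constants, and gains are all assumed), raising the proportional gain mainly shortens the rise time while the integral term removes the steady-state speed error:

# Toy sketch (not Dr. Wright's controller): PI blade-pitch regulation of rotor
# speed about a baseline pitch, on a one-state rotor model. Every constant and
# gain below is an illustrative assumption.
J = 4.0e5          # rotor inertia [kg m^2]
k_pitch = 2.0e5    # aero torque shed per degree of pitch above baseline [N m/deg]
k_wind = 1.0e5     # aero torque added per m/s of wind gust [N m/(m/s)]
omega_ref = 2.0    # rated rotor speed [rad/s]
pitch_base = 5.0   # baseline (constant) pitch [deg]
kp, ki = 8.0, 4.0  # PI gains: kp mainly sets rise time, ki removes steady error

omega, integ, dt = omega_ref, 0.0, 0.01
for step in range(3000):                        # 30 s of simulated time
    gust = 3.0 if step * dt > 5.0 else 0.0      # step wind disturbance [m/s]
    err = omega - omega_ref
    integ += err * dt
    pitch = pitch_base + kp * err + ki * integ  # perturbation added to baseline
    torque = k_wind * gust - k_pitch * (pitch - pitch_base)
    omega += (torque / J) * dt                  # explicit-Euler rotor update
print(omega - omega_ref)                        # speed error is driven toward zero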

Dr. Wright concluded his presentation by stating his team had designed and tested two separate control loops based on the advanced control methods of independent pitch control and generator torque control.  With independent pitch control, they studied speed regulation at high wind speeds, active tower fore-aft damping, and asymmetric shear mitigation.  With generator torque control, they studied active tower side-to-side damping and active drive-train torsion damping. They found it necessary to refine the generator torque control loop to account for actuator delay and to penalize control at harmonics of the rotor speed.  The results showed decreased loads in the drive-train, tower, and blades. The team is now looking at the effect multiple wind turbines have on one another, as in a wind farm, and plans to extend the research to offshore wind farms.

For further reading consider the following:

  • Ullah, N.R., Bhattacharya, K., Thiringer, T., “Wind Farms as Reactive Power Ancillary Service Providers - Technical and Economic Issues”, IEEE Transactions on Energy Conversion, Volume 24, Issue 3, Pages 661-672, Sept. 2009.
  • Ullah, N.R., Thiringer, T., “Variable Speed Wind Turbines for Power System Stability Enhancement”, IEEE Transactions on Energy Conversion, Volume 22, Issue 1, Pages 52-60, Feb. 2007.
  • Masters, G., “Wind Power Systems”, Renewable and Efficient Electric Power Systems, Edition 1, Pages 307-383, IEEE, 2005.

A view from Dr. Dudley Herschbach’s presentation on “High Pressure Chemistry: Making Gas by Squeezing Wet Rocks”

Dudley R. Herschbach earned a B.S. and M.S. from Stanford University and his M.A. and Ph.D. from Harvard University.

Dr. Herschbach on gas production by high pressure chemistry.

Dr. Herschbach, who won the Nobel Prize in Chemistry in 1986, began his talk on high pressure chemistry with a brief introduction to the ranges of pressure, from the approximate pressure at the center of the Earth to that of hydrogen gas in outer space, at which point he introduced his main topic, the study of high pressure chemistry.

According to Dr. Herschbach, high pressure chemistry is a field most notably researched by Russell J. Hemley. Dr. Hemley began by extending chemical-physics techniques in high-pressure diamond anvil cell experiments and later expanded his research to condensed matter physics, materials science, and planetary science.  Much of the work Dr. Herschbach commented on showed that matter, regardless of its makeup, follows the rules of physics: as energy increases and volume decreases, all matter eventually reaches the same universal pressure incline.  Graphical support came from the energy-pressure diagram of H2 and H2+, whose curves converge toward the same asymptote.

Pressure has been a topic of research since Galileo’s time. Dr. Herschbach presented a notable hypothesis, that of Thomas Gold, who believed that hydrocarbon gases were pushing up toward the Earth’s surface near earthquake faults.  Gold launched a major research drilling operation to prove his hypothesis, but after many failed attempts in Sweden his research was met with skepticism, as only minor reservoirs of oil were found. He later modified his hypothesis in The Deep Hot Biosphere, suggesting that oil comes from bacteria and that natural gas production is the result of high pressures.

Dr. Herschbach stated some of the underlying assumptions of the deep-earth gas theory: 1) hydrocarbons are primordial, 2) hydrocarbons are subsequently not fully oxidized, 3) hydrocarbons are stable at great depths, and 4) deep rocks contain pores. His interest is in extracting gas from wet rocks; in theory, as depth increases, both pressure and temperature increase. In squeezing wet rocks, you increase the pressure, raising the temperature and thereby evaporating the liquids into gas. He presented evidence in the form of Raman spectroscopy and a microscopic view of the diamond anvil cell showing bubbles of CH4 forming at high pressure.  He also noted that high pressure chemistry has found organisms still alive in high-pressure environments once believed to kill everything off.

Dr. Herschbach continued his talk with the pressure workings of a water pump and the development of the barometer, including a brief history of Evangelista Torricelli, the 17th-century Italian physicist and mathematician.  He then elaborated on the physics of pressure by explaining how a tanker truck can be crushed by atmospheric pressure alone if its internal pressure falls below atmospheric.  His final demonstration was placing a whole boiled egg inside a bottle, showing the effect of a pressure difference: in reaching equilibrium, the egg was pushed into the lower-pressure zone of the bottle.

In conclusion, Dr. Herschbach introduced some of the world’s leading experts in high pressure chemistry and presented theories proving and disproving hypotheses on the formation of gas.  To give a glimpse of the development of pressure research, he took his audience through the ages, guiding us through the pressure-related works of physicists such as Galileo Galilei and Evangelista Torricelli.

For further reading consider the following:

  • Text Book: Holzapfel, W.B., Isaacs, N.S., High Pressure Techniques in Chemistry and Physics: A Practical Approach, June 1997.
  • Publication: Tatsumi, Y., Hamilton, D.L., Nesbitt, R.W., “Chemical characteristics of fluid phase released from a subducted lithosphere and origin of arc magmas: Evidence from high-pressure experiments and natural rocks”, Journal of Volcanology and Geothermal Research, Volume 29, Issues 1-4, September 1986, Pages 293-309.
  • Publication: Wang, K., Duan, D., Wang, R., et al., “Stability of hydrogen-bonded supramolecular architecture under high pressure conditions: Pressure-induced amorphization in melamine-boric acid adduct”, Langmuir, 25, 4787-4791 (2010).


A view from Dr. Anna M. Michalak’s presentation on “Towards a Global Carbon Monitoring System: Assimilating Environmental Data in a Geostatistical Framework”

Anna M. Michalak earned her B.S. from the University of Guelph, Ontario, and her M.S. and Ph.D. from Stanford University.

Dr. Michalak on monitoring carbon emissions.

Dr. Michalak introduced her line of research as developing methods for more accurately monitoring global carbon emissions. She began by briefly summarizing the carbon cycle, stating that humans release eight gigatons of carbon per year, of which four gigatons remain in the atmosphere each year.  The total CO2 released throughout the year comprises releases from oceans, vegetation, and humans.  Dr. Michalak studies the residue left in the atmosphere after carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere. Her motivation for pursuing a global carbon monitoring system centers on finding a reliable method of modeling carbon exchange; the future of natural carbon sinks is currently a major source of uncertainty in climate projections.

Her research is driven by a new US carbon cycle science plan, which asks: How do natural processes and human actions affect the carbon cycle? How do US policy and management affect the quantity of carbon emissions? And how are ecosystems impacted by increasing greenhouse gases? With these questions in mind, Dr. Michalak began her study by performing a carbon flux diagnosis, measuring net flux via eddy covariance at data-gathering towers throughout the nation.  This data, however, is representative of an area of about one square kilometer, and it measures the exchange of carbon, not the concentration of CO2 present.

An inverse model is developed from the gathered data coupled with meteorological measurements to determine where the carbon came from; this requires knowledge of wind currents, temperature variances, rainfall, and so on. Her strategy for inverse modeling is to understand the scale dependence of the processes controlling CO2 flux and their spatial-temporal variability, and to reconcile top-down and bottom-up estimates of CO2 exchange by minimizing a priori assumptions. As an example of the high uncertainty observed in inverse models, Dr. Michalak shared 17 different models all derived from the same data, each providing a different estimate of global carbon exchange.  To obtain a more reliable estimate, her research combines a biospheric model and a transport model to achieve inverse modeling of the carbon flux estimates.  Modeling the carbon flux for the US also required understanding the carbon concentrations that enter from Asia and Mexico.   The result is a basic understanding of the spatial distribution of CO2 at monthly, seasonal, and annual scales. To further understand and validate the inverse model, scale dependence is examined and each variable’s importance is identified by incorporating the variables into the equation one at a time and analyzing their impact on the model.

After discussing the results of various scaled inverse models, Dr. Michalak concluded that recent years have seen unprecedented interest in the global carbon cycle, and that a global carbon monitoring system capable of tracking the carbon cycle and anthropogenic emissions must be strongly data-driven, taking advantage of all available data.  The system must be informative, be driven by process understanding of CO2 flux, minimize reliance on assumptions, and not depend on any one individual, model, or inventory.

For further reading consider the following:

  • Paper: Whittaker, S., et al., IEA GHG Weyburn CO2 monitoring & storage project summary report 2000-2004: from the proceedings of the 7th International Conference on Greenhouse Gas Control Technologies, September 5-9, Vancouver, Canada, Volume III, Petroleum Technology Research Center 2004.
  • Journal: Janssens, I. A., Freibauer, A., Ciais, P., et al., Europe’s Terrestrial Biosphere Absorbs 7 to 12% of European Anthropogenic CO2 Emissions, Science, 6 June 2003: Pages 1538-1542.
  • Journal: Penner, J.E., Eddleman, H., Novakov, T., Towards the development of a global inventory for black carbon emissions, Atmospheric Environment. Part A. General Topics, Volume 27, Issue 8, June 1993, Pages 1277-1295


A view from Dr. Joel R. Sevinsky’s presentation on “Microbial Ecology of Coal Bed Methanogenesis: A Molecular Biologist’s Point of View”

Joel R. Sevinsky earned his B.A. from the University of California, Santa Barbara and his Ph.D. from the University of Colorado, Boulder.

Dr. Sevinsky was a Senior Principal Investigator at LUCA Technologies. He stated that their goal is to generate microbial methane in real time in existing coal beds, at economic rates and volumes, with the aim of providing a sustainable energy source. In his presentation he provided background information on coal bed methane, the challenges encountered, and their approach to community profiling.

Dr. Sevinsky’s research centered on identifying coal beds in need of microbial community restoration and introducing a mixture of amendments to promote the production of methane. In other words, in the study of coal bed methane, microbes are introduced and transported to promote methane production. The microstructure of coal allows micro-organisms to live in its fractures.  Each coal well removes water to reduce the underground pressure and promote the release of methane from the coal; the gas moves up the well and out to the compressors. Methane depletion occurs once the level of methane extracted falls below profitable margins, at which point it was usual to abandon the well and drill a new one.  LUCA Technologies takes abandoned wells, before they are decommissioned by collapsing the well and filling it with concrete, and studies their microbial ecology in order to introduce a mixture of microbes that restores the community of methane-producing organisms.

LUCA’s proprietary mixture of amendments is gravity-fed into the coal seam. Though the proportions of each mixture are in-house information, the ingredients used for restoration can be found on their website.  The study of restored coal beds provides a promising outlook for creating a sustainable energy source. The challenge, however, is the constant struggle to identify the cause of biodegradation in each coal bed and to develop a microbial community capable of maintaining methane production. Each coal substrate undergoes a different level of biodegradation in producing methane, and it is in this biodegradation state that microbes are introduced as a catalyst for methane production. Dr. Sevinsky described their approach to community profiling as a five-step process: 1) collect micro-organisms, 2) isolate DNA, 3) sequence DNA, 4) examine and record the types of micro-organisms, and 5) compare microbial communities. The microbial catalyst mixture has the most profound effect on the rate of production when the quantities needed are properly identified.  This is a continuous process, however, as the microbial communities are constantly changing on the way to the desired methane yield.

Collecting the micro-organisms is accomplished by retrieving water from the coal bed.  Isolating the DNA is the critical part, as DNA has variable regions used for micro-organism identification. Once the DNA is sequenced and the organisms identified, community profiling is realized through pyrosequencing. UniFrac is a tool for phylogenetic beta diversity in which a phylogenetic tree is used to build a distance matrix and determine clusters of environments; this portion requires heavy computational analysis. Throughout his research, Dr. Sevinsky has found that in pre-restoration there are well-to-well differences in baseline community structures, but that with respect to bacteria there is no one specific micro-organism that will restore the community.  In post-restoration, restoration with a specific amendment creates specific community profiles. At this point, Dr. Sevinsky presented several scatter plots of the weighted UniFrac diversity matrix of bacterial community profiling of the Powder River and of well community profiles from sequenced DNA.

In concluding his presentation, Dr. Sevinsky brought up other areas of consortium research: stable isotope probing for dissection of metabolic pathways, model compound enrichment, and enrichments in colony isolation.  He also stated the reasons for funding the research: a target for restoration strategies, tools for real-time monitoring, and tools for asset evaluation.  This raises the questions: is there a methanogenic community, and what restoration strategies should be prioritized? He closed his presentation by stating that modern applied microbiology is a multidisciplinary science and played a video of well methane bubbling out of water 900 ft below ground to give a sense of their typical field work.

For further reading consider the following:

  • Journal: Jones, E. J.P., Voytek, M. A., Warwick, P. D., Corum, M. D., Cohn, A., Bunnell, J. E., Clark, A. C., Orem, W. H., Bioassay for Estimating the Biogenic Methane-Generating Potential of Coal Samples, International Journal of Coal Geology, Volume 76, Issues 1-2, 2 October 2008, Pages 138-150, ISSN 0166-5162.
  • Paper: Kakadjian, S., Garze, J., Zamora, F., Enhancing Gas Production in Coal Bed Methane Formation with Zeta Potential Altering System, SPE Asia Pacific Oil and Gas Conference and Exhibition, 18-20 October 2010, Brisbane, Queensland, Australia, Society of Petroleum Engineers.
  • Journal: Strapoc, D., Picardal, F. W., Turich, C., Schaperdoth, I., Macalady, J. L., Lipp, J. S., Lin, Y.S., Ertefai, T. F., Schubotz, F.,  Hinrichs, K.U., Mastalerz,  M.,  Schimmelmann,  A., Methane-Producing Microbial Community in a Coal Bed of the Illinois Basin, Applied Environmental Microbiology, Volume 74, 15 April 2008, Pages 2424-2432

A view from Dr. Baron Peters’ presentation on “Simulation Methods for a Mechanistic Understanding of Nucleation and Polymorph Selection”

Baron Peters earned his B.S. and M.S. from the University of Missouri and his Ph.D. from the University of California, Berkeley.

Dr. Peters on simulating nucleation and polymorph selection.

Dr. Peters introduced the three main subjects of his study: polymorphism, mitosis, and laser-induced nucleation.  Polymorphism was described in terms of Stranski and Totomanov’s theories on nucleation. Mitosis centered on the interfacial free energy that must be overcome in order to promote growth.  Lastly, laser-induced nucleation was introduced as a new study of the optical Kerr effect on nucleation.

Classical nucleation theory estimates the interfacial free-energy barrier to nucleation by treating the phase nucleus as a core surrounded by metastable liquid. Dr. Peters stated his challenge was finding a method for calculating the free energy at the nanoscale while maintaining a constant chemical potential in the transition phase of nucleation.  To Dr. Peters, this means choosing the appropriate strategy through anecdotal or mechanistic insight. His approach leans toward a mechanistic view of polymorph selection, in which Brownian dynamics and the Yukawa potential are the key to identifying and promoting solid nuclei clusters.

In studying mitosis, he found that the free-energy landscape has only one channel toward polymorphism rather than two, as traditionally believed. This posed the question: if there is only one channel, how do we get two polymorphs? Dr. Peters states that from an unstable liquid, a polymorph nucleus takes the fastest route to stability depending on its particle content and the energy level of its interfacial free-energy barrier. Using a 2D steady Smoluchowski model with a 2D Kramers crossover, the polymorphic pathway can be determined, since the particle dynamics of nucleation size overpower the selection.  Dr. Peters used two methods for estimating the free energy. The traditional approach involves the use of the Frenkel defect but leads to large-scale calculations for each particle insertion.  Dr. Peters therefore uses umbrella sampling for explicit solvent systems in lattice form, from which he can monitor the structure of the polymorph; this gives him a basis for adding particles to change the polymorphic structure.

Not all polymorphs undergo nucleation within a reasonable time frame; some need weeks. This is where Dr. Peters turned to laser-induced nucleation.  Discovered by Myerson and Garetz, laser-induced nucleation drives solid nuclei clusters into a metastable phase through the optical Kerr effect. Dr. Peters’ hypothesis on the Kerr effect is that the laser’s oscillation is too fast to reorient the chemical potential of the solvent, but the induced electric-field torque aligns the poles of the nuclei clusters fast enough to promote crystallization.  His computational method involved the development of hybrid Potts lattice gas models to describe the two steps of nucleation in semi-grand canonical Monte Carlo form. To test his hypothesis, Dr. Peters used sparkling water as his solvent.  With the laser he examined the optical Kerr effect on the carbon dioxide molecules and observed crystallization in a medium that would otherwise not undergo nucleation.  He believes the Kerr effect promoted bubble formation along the laser light, which in turn caused crystal nucleation.  One question remains about the results: does the absence of absorption bands imply non-photochemical nucleation?  Dr. Peters continues to study this subject to provide a concrete answer.  In concluding the presentation, Dr. Peters touched upon the importance of thermodynamics, dynamics, and specific size metrics in nucleation and polymorph selection.  His research is funded by the NSF and Los Alamos National Laboratory.

For further reading consider the following:

  • Paper: Peters, B., “Progress on Polymorph Selection: Structure Specific Coordinates and Dynamics of Structure Formation”, Chemical Engineering, UC Santa Barbara, Santa Barbara, CA.
  • Journal: Jaworski, Z., Zakrzewska, B., “Towards Multiscale Modeling in Product Engineering”, Computers & Chemical Engineering, Volume 35, Issue 3, 8 March 2011, Pages 434-445.
  • Journal: Desgranges, C., Delhommelle, J., “Molecular Mechanism for the Cross-Nucleation between Polymorphs”, Journal of the American Chemical Society, 2006, 128 (32), 10368-10369.

A view from Dr. Donna Riley’s presentation on “Energy and Justice: Making the Connections in Engineering Thermodynamics”

Donna Riley earned her B.S.E. from Princeton University and her M.S. and Ph.D. from Carnegie Mellon University.

Dr. Riley on recognizing the gaps in engineering.

Her motivation is bridging the gap she sees between the study of engineering and its application to real-world events. She defined energy as a basic human need bound up with choices, global impacts, policy, and macro-ethical issues. She characterized thermodynamics, by contrast, as the study of fossil fuels and heat engines, detailed in focus yet lacking any study of policy or ethical concepts. In general, thermodynamics is esoteric compared to energy.

She therefore asks what students should know about energy.  Should they be introduced to current and emerging technologies? If so, what percentage of the curriculum should be devoted to them?  Traditional courses cover heat engines, heat transfer, fluid dynamics, and energy storage, but these are all just parts of the study of energy.  Energy, she argues, is more than the study of its fundamental components: it should incorporate policy analysis along with decision making in technical choice, fuel sources, co-construction, and macro-ethics.  Justice and energy go hand in hand, through decisions on access to resources, the distribution of use and demand, and the ecological consequences that follow. Dr. Riley holds that there should be a strategy for helping students recognize what they are and are not being taught in their academic courses. To bridge this gap, she introduced her latest research project and textbook companion, which addresses the non-technical ABET material.

As with most projects, Dr. Riley’s venture began with research into different adaptations of critical pedagogies, testing them with focus groups and obtaining feedback through group and individual interviews. She centered her book on the objective of taking knowledge and skills and integrating them into innovative pedagogies of independent learning. Her textbook covers energy and thermodynamics through subjects rarely covered in traditional engineering classes: ethics, history, US policy, international perspectives, risk, reliability, safety, and sustainability. The book is therefore structured in three parts: foundation, making theory relevant, and application.

As with any project, Dr. Riley encountered challenges. She recounted her struggle in determining who and what was to be included in and excluded from her text. In introducing the history of thermodynamics, she would be bringing to light challenges to assumptions made throughout the field while attempting to cover the dynamic energy policies of the state. Her greatest challenge has been introducing liberal pedagogies that shift the power of education from the professor to the student.  In bridging the gap, the text goes through global and US demands for energy and their impact on climate change, making the 1st and 2nd laws and property relations relevant to current events while providing applications in new and old world technology choices and usage sectors. Its modules are structured to engage, analyze, reflect, and change the traditional approach to thermodynamics. That is why she proposes and answers four questions of justice:

  • How can thermodynamics and its canons include voices long silenced in history?
  • How is the climate changing and whose responsibility is it?
    1. North vs. South: analyzing the ethics of the Copenhagen (non)-agreement from a variety of philosophical standpoints and examining its stakeholders.
  • How does US food policy affect poverty levels?
  • What consequences can be seen from energy technology applications?
    1. Military conflicts and environmental effects.

In the supplemental material she will be publishing, Dr. Riley hopes to provide the means for moral reasoning, critical thinking, social engagement, organization, and communication. She has yet to settle on a title for the textbook.

The Finnie Model of Erosive Wear

The following model is what I use for evaluating wear in DEM (discrete element method) simulations. It’s based on the work of Iain Finnie:

Finnie I., Erosion of Surfaces by Solid Particles. Wear, 3(2):87-103, 1960.

The wear model I use for granular contact has two parts. The first part is the calculation of the number, direction (angle of impact), and velocity of the particles striking the ductile surface; the second is the calculation of the volume of surface material those impacts remove.  Brittle materials can fracture upon impact and require a different mathematical model, based solely on the material properties, which will not be discussed here.

The model assumes a ductile material, so it calculates the amount of surface material removed based on the particle trajectories. Note: although the model calculates the volume of material removed, the particle velocities depend on the particle size, which is an estimate to begin with, so the effect of particle size on erosion is relatively uncertain and the result is a qualitative measure of wear.

Materials such as steel undergo wear by a process of plastic deformation in which material is removed by the displacement or cutting action of the eroding particle. The assumptions behind the model are that the ratio between the normal force component and the shear component has a constant value K, which holds if the particle rotation during the cutting/impact is fairly small, and that the ratio between the depth of contact l and the depth of cut y_t, as seen in the figure below, has a constant value \psi.


Wear, 3(1960) 93: Idealized picture of abrasive grain striking a surface and removing material. Initially the velocity of the particle’s center of gravity makes an angle \alpha with the surface.


A further assumption is that a constant plastic flow stress p is reached immediately upon impact.  For the traction analysis, the particle cutting face is taken to be of uniform width, large compared to the depth of cut. The volume of material removed by the particle is then taken as the product of the area swept out by the particle tip and the width of the cutting face.

The cases under which material is removed are as follows:

  1. The particle comes in at a low angle, cuts out part of the surface, and then leaves again, meaning the depth of cut goes to zero as the particle departs.
  2. At the higher angles the horizontal motion of the particle ceases before it leaves the surface so the cutting stops when the velocity goes to zero.

For (1), think of a particle rubbing along the surface when impacting at a low angle; for (2), a particle striking the surface head-on and creating a crater at a high angle.

Integrating over the duration of impact or cutting period provides the following expressions for the volume removed:

Q = \frac{mV^2}{p \psi K}(\sin 2 \alpha - \frac{6}{K} \sin^2 \alpha)          if \tan \alpha \leq \frac{K}{6}

Q = \frac{mV^2}{p \psi K}(\frac{K \cos^2 \alpha}{6})          if \tan \alpha \geq \frac{K}{6}

The first equation is for low angle impact, the second for high angle impact. The variables are defined as follows:

Q = the volume of material removed

m = the mass of the particle or effective mass of the particles impacting the surface

V = the velocity of impact

p = constant of plastic flow stress

\psi = ratio of depth of contact to depth of cut

\alpha = angle of impact

K = ratio between the normal force and the shear force
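To make the piecewise expression concrete, here is a minimal Python sketch of the two equations above. The function and argument names are mine; it implements the uncorrected model exactly as written:

import math

def finnie_volume(m, V, alpha, p, K, psi):
    """Uncorrected Finnie volume removed by a single impact.

    m:     particle mass (or effective mass of the impacting stream)
    V:     impact velocity
    alpha: impact angle in radians, measured from the surface
    p:     plastic flow stress of the surface
    K:     ratio of normal force to shear force
    psi:   ratio of depth of contact to depth of cut
    """
    c = m * V**2 / (p * psi * K)
    if math.tan(alpha) <= K / 6.0:
        # low-angle regime: the particle cuts and then leaves the surface
        return c * (math.sin(2.0 * alpha) - (6.0 / K) * math.sin(alpha)**2)
    # high-angle regime: horizontal motion stops before the particle leaves
    return c * (K * math.cos(alpha)**2 / 6.0)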

A downside to this model is that the formulation assumes the particle impacts a smooth surface every time, when in reality the surface is worn away by previous impacts and becomes rough.  Since the surface roughness increases with each impact, a correction is made to the model: an impact with a rough surface removes more material than one with a smooth surface, and not every particle that strikes the surface removes material in the idealized manner (some remove none at all).  Thus, by inspection of erosion craters due to a known number of abrasive grain cuts, the volume removed is taken to be 50% of the predicted erosion.

In addition, \psi is assumed to be 2, from metal-cutting experiments, according to Finnie.  This leaves one variable for the user to define: K.  Using K = 2, as approximated from angular abrasive grain erosion tests, the corrected volume removed is approximately:

Q \approx \frac{mV^2}{8p}(\sin 2 \alpha -3 \sin^2 \alpha)          \alpha \leq 18.5 ^{\circ}

Q \approx \frac{mV^2}{24p} \cos^2 \alpha          \alpha \geq 18.5^{\circ}

Maximum erosion has been observed in the impact angle range of 15-20° so the estimated maximum volume removal is given by

Q \approx 0.075 \left( \frac{mV^2}{2} \right) \frac{1}{p}

which is 7.5% of the particle’s kinetic energy divided by the flow pressure of the material.
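Continuing the finnie_volume sketch above: setting \psi = 2, K = 2 and applying the 50% correction reproduces the two approximate equations, and a sweep over impact angle (illustrative only) recovers the roughly 7.5% peak:

# reproduce the corrected equations and the ~7.5% kinetic-energy bound
alphas = [math.radians(a) for a in range(1, 90)]
Q_peak = max(0.5 * finnie_volume(m=1.0, V=1.0, alpha=a, p=1.0, K=2.0, psi=2.0)
             for a in alphas)
print(Q_peak / 0.5)  # Q as a fraction of (m V^2 / 2)/p; ~0.0756, near 16-17 degrees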

What I use as an input to the DEM engine is the K variable relating the normal force to the shear force.  This is also a measure of the shape of the material impacting the surface: if the material is nearly spherical, the K value will be large; for material that is abrasive (has a rough, angular shape), the K value will be small.

Most abrasive material has a K value in the range of 1.5 to 2.5.  Perfectly spherical material would have a K value of 5.0.  Studies show that K is approximately 2 for angular abrasive grain. As most material modeled with spherical particles is rough, the K value should be on the lower end of the K value range.

Contact Area of Particle-Plane Impact


A particle of radius R impacts a wall with some velocity, V_{impact}. The penetration length is defined as \delta. The contact area radius is defined by a.

Using the Pythagorean theorem, the contact radius is defined by:

a^2 + (R - \delta)^2 = R^2

a^2 = R^2 - (R-\delta)^2

a^2 = R^2 - (R^2 -2R \delta + \delta^2)

a^2 = 2R \delta - \delta^2

If \delta is small then \delta^2 is even smaller and we may omit it. Therefore, a^2 = 2R \delta and the cross sectional area is defined by:    A_0 = \pi a^2 = 2 \pi R \delta.
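As a quick numerical check of the small-\delta approximation (a Python snippet with arbitrary example values):

import math

R = 5.0e-3      # particle radius [m], arbitrary example
delta = 1.0e-5  # penetration depth [m], small compared to R

A_exact = math.pi * (2.0 * R * delta - delta**2)  # from the Pythagorean relation
A_small = math.pi * (2.0 * R * delta)             # dropping the delta^2 term

# relative error is delta / (2R - delta), about 0.1% here
print((A_small - A_exact) / A_exact)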


Deriving the Damping Coefficient (part 2)

Condition 1: At time = 0 and position y = 0


y(0) = e^{- \xi *0} \{Acos( \tilde{ \omega} *0) \ + \ Bsin(\tilde{ \omega} *0 )\} = 0

 Acos( \tilde{ \omega} *0) \ = \ A(1) \ = \ 0

Therefore: A = 0

To use the condition of V_0 \ = \ \dot{y} at t = 0, we need to take the first derivative of the position equation

y(t) = e^{- \xi t} \{Bsin( \tilde{ \omega} t)\}

\dot{y}(t) = e^{- \xi t} \frac{d}{dt} \{Bsin( \tilde{ \omega} t)\} + \{Bsin( \tilde{ \omega} t)\} \frac{d}{dt} e^{- \xi t}

\dot{y}(t) = e^{- \xi t} \{ \tilde{ \omega} Bcos( \tilde{ \omega} t )\} - \xi \{Bsin( \tilde{ \omega} t) \} e^{- \xi t}

\dot{y}(t) = e^{- \xi t} B \{ \tilde{ \omega} cos( \tilde{ \omega} t )- \xi sin( \tilde{ \omega} t) \}

V_0 = e^{- \xi *0} B \{ \tilde{ \omega} cos( \tilde{ \omega} *0 )- \xi sin( \tilde{ \omega}*0) \}

V_0 = B \tilde{\omega}

B = \frac{V_0}{\tilde{\omega}}

This brings our position and velocity equations to:

y(t) = \frac{V_0 e^{- \xi t}}{\tilde{\omega}}sin(\tilde{\omega}t)

\dot{y}(t) = \frac{V_0 e^{- \xi t}}{\tilde{\omega}}\{\tilde{\omega} cos( \tilde{\omega}t) - \xi sin(\tilde{\omega}t)\}

The goal of solving this system is to extract the damping coefficient. To do this, evaluate the system at the instant the particle rebounds from the ground.


These conditions are: y = 0, t = t_f, \dot{y} = V_f

First we need to determine t = t_f


The equation for position is a sine wave of the form: y = C \, sin(\tilde{\omega}t)


The sine wave has a period T, and the time elapsed during the contact is T/2, where

period, T = \frac{2 \pi}{\tilde{\omega}}

Therefore,

time, t_f = \frac{T}{2} = \frac{\pi}{\tilde{\omega}}

Solving the velocity equation at the determined conditions: t = \frac{\pi}{\tilde{\omega}}, \dot{y} = V_f

\dot{y}(t) = \frac{V_0 e^{- \xi t}}{\tilde{\omega}}\{\tilde{\omega} cos( \tilde{\omega}t) - \xi sin(\tilde{\omega}t)\}

V_f = \frac{V_0 e^{- \xi \frac{\pi}{\tilde{\omega}}}}{\tilde{\omega}}\{\tilde{\omega} cos( \tilde{\omega}\frac{\pi}{\tilde{\omega}}) - \xi sin(\tilde{\omega}\frac{\pi}{\tilde{\omega}})\}

V_f = \frac{V_0 e^{- \xi \frac{\pi}{\tilde{\omega}}}}{\tilde{\omega}}\{\tilde{\omega} cos( \pi) - \xi sin(\pi)\} = -V_0 e^{- \xi \frac{\pi}{\tilde{\omega}}}

The minus sign simply reflects the reversed direction of travel at rebound. Using the magnitudes of V_f and V_0 with the linear definition of the coefficient of restitution, e, which is given by:

e = \frac{|V_f|}{V_0}

e = e^{- \xi \frac{\pi}{\tilde{\omega}}}

Bringing our system back to its initial terms m, k, and c, and recalling that \xi = \zeta \omega_0 and \tilde{\omega} = \omega_0 \sqrt{1 - \zeta^2}:

ln(e)=- \zeta \omega_0 \frac{\pi}{\omega_0 \sqrt{1 - \zeta^2}}

ln(e)=- \zeta  \frac{\pi}{\sqrt{1 - \zeta^2}}

\sqrt{1 - \zeta^2} ln(e)=- \zeta \pi

(\sqrt{1 - \zeta^2}ln(e) )^2= (- \zeta \pi)^2

(1-\zeta^2) [ln(e)]^2 = \pi^2 \zeta^2

 [ln(e)]^2 - \zeta^2 [ln(e)]^2 = \pi^2 \zeta^2

\pi^2 \zeta^2 + \zeta^2 [ln(e)]^2 = [ln(e)]^2

\zeta^2 (\pi^2 + [ln(e)]^2) = [ln(e)]^2

\zeta^2 = \frac{[ln(e)]^2}{\pi^2 + [ln(e)]^2}

Recall:

damping ratio, \zeta = \frac{c}{C_{crit}} \rightarrow \frac{c}{2 \sqrt{km}}

 (\frac{c}{2 \sqrt{km}})^2 = \frac{[ln(e)]^2}{\pi^2 + [ln(e)]^2}

 \sqrt{(\frac{c}{2 \sqrt{km}})^2 }= \sqrt{\frac{[ln(e)]^2}{\pi^2 + [ln(e)]^2}}

Since c, k, and m are positive while ln(e) is negative for e < 1, take the positive root, noting that [ln(\frac{1}{e})]^2 = [ln(e)]^2:

\frac{c}{2 \sqrt{km}}= \frac{ln(\frac{1}{e})}{\sqrt{\pi^2 + [ln(\frac{1}{e})]^2}}

c = \frac{2\sqrt{km}ln(\frac{1}{e})}{\sqrt{\pi^2 + [ln(\frac{1}{e})]^2}}

This defines the viscous damping coefficient.
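As a sanity check on this expression, the short Python sketch below (with arbitrary k, m, and e) computes c and then confirms that the half-period decay factor e^{- \xi \pi / \tilde{\omega}} recovers the restitution coefficient we started from:

import math

def damping_coefficient(e, k, m):
    # viscous damping coefficient from restitution e, stiffness k, mass m
    L = math.log(1.0 / e)
    return 2.0 * math.sqrt(k * m) * L / math.sqrt(math.pi**2 + L**2)

k, m, e = 1.0e5, 0.01, 0.6   # arbitrary example values
c = damping_coefficient(e, k, m)

zeta = c / (2.0 * math.sqrt(k * m))          # damping ratio
omega0 = math.sqrt(k / m)                    # natural frequency
omega_t = omega0 * math.sqrt(1.0 - zeta**2)  # damped frequency
print(math.exp(-zeta * omega0 * math.pi / omega_t))  # prints 0.6, recovering e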