Linear Chemistry Systems
The science of chemistry, the whole thing, is analysed in terms of systems thinking over three pages of this web book. This second page deals with linear (predictable) chemistry systems.
The first question is:
- Why are there any linear chemistry systems at all? Why do gases behave in such a way that the Ideal Gas Equation, PV = nRT, is linear?
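The linearity is easy to see numerically. A minimal Python sketch of PV = nRT rearranged for pressure, which recovers the familiar result that one mole of ideal gas at 0 °C in 22.4 litres exerts about one atmosphere:

```python
# Ideal Gas Equation: P V = n R T  =>  P = n R T / V
R = 8.314        # gas constant, J mol^-1 K^-1
n = 1.0          # amount of gas, mol
T = 273.15       # temperature, K (0 degrees C)
V = 0.0224       # volume, m^3 (22.4 litres)

P = n * R * T / V    # pressure in pascals: ~101 kPa, about 1 atmosphere
```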
The Ergodic Hypothesis, The Principle of Equal a priori Probability & The Correspondence Principle
It is a fundamental assumption of statistical mechanics & kinetic theory that a closed system with many degrees of freedom, such as a gas in a closed container, ergodically samples all equal energy points in phase space. The ergodic hypothesis states that the time averaged behaviour of microscopic quantities, atoms & molecules, gives the same result as the macroscopic "ensemble" average, where the "ensemble" is the collection of all the possible states that an assembly of molecules would reach in an infinite amount of time.
In other words: when a chemical system is at equilibrium the time-average is equal to the ensemble-average.
- Due to the astonishing size of Avogadro's number, 6.022 × 10²³, the techniques of statistical mechanics can be used to explain and derive chemical thermodynamic principles, and therefore understand the bulk behaviour of gases, liquids, solids, solutions & mixtures.
- Fluids are very complicated on the microscopic scale; however, the Navier-Stokes equations depend only upon the density and viscosity of the fluid to describe bulk behaviour. The chaos of the individual molecular trajectories disappears on large scales.
- For a step-by-step look at the philosophy and logic underpinning kinetic theory, look here.
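The disappearance of microscopic chaos on large scales can be illustrated with a toy Monte Carlo sketch (not a real molecular simulation): the relative fluctuation of a sample's mean "energy" shrinks roughly as 1/√N as the number of particles N grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(n_particles, n_trials=500):
    """Relative scatter of the mean 'energy' of a sample of n_particles."""
    # Draw random per-particle energies; any broad distribution will do
    samples = rng.exponential(1.0, size=(n_trials, n_particles))
    means = samples.mean(axis=1)
    return means.std() / means.mean()

small = relative_fluctuation(100)      # a noisy average
large = relative_fluctuation(10_000)   # ~10x less noisy: chaos averages out
```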
The ergodic hypothesis was important in the 19th century, early in the development of statistical mechanics. These days the generally accepted principle of equal a priori probability states that for a quantum system the probability of an entity being found in any particular quantum state is equal to the probability of that entity being found in any other quantum state, with the constraint that the total energy must equal the sum of the energies of all the particles.
The principle of
equal a priori probability is time independent and it
can be used to derive the Boltzmann distribution. The implication is
that the time dependent ergodic hypothesis is, strictly, not necessary.
But, as discussed in this link,
it is currently not possible to mathematically prove [in an elegant
manner] the correctness of the principle of equal a priori probability.
The correspondence principle, first invoked by Niels Bohr in 1923, states that the behaviour of a quantum mechanical system will reduce to classical physics when the quantum numbers are large. This means that either some quantum numbers of the system are excited to a very large value, or the system is described by a large set of quantum numbers, or both.
Together, the ergodic hypothesis, the principle of equal a priori probability and the correspondence principle "explain" (give a theoretical and philosophical underpinning and justification to) the ability to extrapolate from the world of microscopic chemical species and quantum mechanics (atoms, ions, molecules) to the macroscopic world of bulk substances and materials.
Thus, classical physics "works" because simple physical "laws" emerge on large scales.
Dilute, Homogeneous, Ideal Solutions
Linear behaviour is generally observed in homogeneous [stirred] solutions with concentrations ranging from 0.1 mol per litre (0.1 mol dm⁻³) or less down to "infinite" dilution (but not past the Avogadro dilution limit, where statistically no analyte species are present).
Ideal dilute solutions are associated with linear laws and relationships: the Beer-Lambert law, the osmotic pressure law and the Nernst equation (pH) all apply, and the solubility of salts in the presence of common ions can be predicted.
Many analytical techniques
exploit the linear, predictable behaviour of the dilute solution regime.
The procedures are surprisingly general:
- First a representative
set of samples for analysis are obtained.
- If the samples
are solids, such as rock, pharmaceutical tablets or biological specimens,
the samples are ground to a fine powder or otherwise homogenised.
- The analyte species
are dissolved and/or extracted into a suitable solvent, the choice of
which is often crucial.
- The dilute analyte
solutions are filtered and assayed by one of many techniques: titration,
optical absorption, electrochemical potential difference, NMR, etc.
If the analyte is a mixture, the analysis may be preceded by chromatography.
- The assay results
are compared against a calibration curve, which should be a straight
line, prepared from samples of the same analyte and of known concentrations:
- Note that non-linear effects can arise from limitations in the analytical technique which are independent of the linearity of the ideal dilute solution. For example, pH probes behave poorly in alkaline solutions and UV/vis optical absorption detectors are susceptible to stray light and other errors.
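A calibration curve of this kind is easily handled in code. The sketch below fits a least-squares straight line to hypothetical absorbance readings (the Beer-Lambert law predicts A = εlc, a line through the origin) and reads an unknown concentration off the fitted line:

```python
import numpy as np

# Hypothetical calibration standards: concentration (mol dm^-3) vs absorbance
conc = np.array([0.0, 0.002, 0.004, 0.006, 0.008, 0.010])
absorbance = np.array([0.001, 0.121, 0.239, 0.362, 0.478, 0.601])

# Least-squares straight line: absorbance = slope * conc + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

# Read an unknown sample's concentration off the calibration line
a_unknown = 0.300
c_unknown = (a_unknown - intercept) / slope   # ~0.005 mol dm^-3
```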
The question of where dilute,
homogeneous solutions can be found in chemistry space is addressed towards
the end of this web page, here.
Linear phenomena are commonly found in dilute homogeneous solutions. It follows that when looking for linear behaviour, experiments should be conducted in the dilute homogeneous solution regime whenever possible.
Chemists are familiar with the idea of thermodynamic "open systems" and "closed systems", where (for example):
A jar half-filled with solvent
and with the top on is a closed system, but when the top is removed
the system becomes open and the solvent is free to evaporate.
The phase boundary is seething with activity: at the surface of the liquid, individual solvent molecules continuously leave and enter the gas phase before rejoining the bulk liquid. In the closed jar this activity is at dynamic equilibrium, with equal numbers of molecules moving in each direction. In the open jar, more solvent molecules enter the gas phase and then leave the confines of the jar than rejoin the liquid phase.
- Only closed systems can
be perfect and completely uncomplicated.
- Only closed systems can
exhibit dynamic equilibrium.
- Open systems can never be at true equilibrium, although careful experimental design may approximate a closed system. For example, we can measure the pH of a 0.0100 molar acetic acid solution in an open flask because the time constant for acid-base equilibrium is many orders of magnitude shorter than that for evaporation of water and acetic acid from the flask.
- Only mathematicians and physical scientists have access to true closed systems. All biological systems are open. (The closest thing to an exception is the amazing Ecosphere, but even the Ecosphere requires light.)
- Generally, chemical systems
are dynamic, even though they may appear static to the untrained eye.
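The acetic acid measurement mentioned above is easy to check numerically. A minimal sketch, assuming the commonly tabulated Ka of about 1.8 × 10⁻⁵ for acetic acid:

```python
import math

# pH of 0.0100 mol dm^-3 acetic acid.
# Ka = x^2 / (C - x), where x = [H+]; rearranged: x^2 + Ka*x - Ka*C = 0
Ka = 1.8e-5   # assumed textbook value for acetic acid
C = 0.0100    # mol dm^-3

x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2   # positive root = [H+]
pH = -math.log10(x)                             # ~3.4
```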
Equilibrium chemistry involves understanding how chemically dynamic systems behave when a parameter (temperature, pressure, concentration or catalyst) is changed. For example: what will happen to the equilibrium position of a particular gas phase reaction if the pressure is increased while the temperature remains constant?
With a profound understanding of matter, heat and energy, a group of 19th century scientists, including Thomson (Lord Kelvin) & Boltzmann, developed the mathematical models (the first, second and third laws of thermodynamics, and kinetic theory) that describe how chemical systems equilibrate.
So successful was this approach that Einstein was later to say: "[Thermodynamics] is the only physical theory of universal content which, I am convinced, will never be overthrown."
Theoretical ideas in thermodynamics
are readily transferred to engineering. A classic and historically important
example of this technology
push is the Haber-Bosch
process for the synthesis of ammonia, NH3. This
was the first large scale industrial chemical reaction process to be performed
under high pressure conditions:
Qualitative predictions in equilibrium thermodynamics space can be made using Le Châtelier's principle, a very useful "trick" for making predictions about the law of mass action.
Likewise, the phase
rule can be used to predict how gases, liquids and solids coexist
and equilibrate in phase space.
On the subject of phase,
students of chemistry often think (and are taught) that "physical"
phase change processes like melting, boiling and sublimation, are somehow
different from "chemical reactions", like the reduction of
benzene to cyclohexane.
However, as shown in the equations below, both the evaporation of water and the catalytic hydrogenation of benzene to cyclohexane can be described by chemical equations that are identical in form: both reaction systems can be balanced in terms of mass, enthalpy & entropy and Gibbs free energy, so they are far more similar than they are different:
Seen in this light, evaporation and hydrogenation are both phase change processes. Hence, it is very difficult, and not very useful, to distinguish between "physical" and "chemical" change. It is better to regard any phenomenon that can be described in terms of a chemical equation as representing change in phase space.
Linear chemical phenomena are commonly found in systems where the phase space is approaching, is close to, approximates to, or is at thermodynamic equilibrium. Chemical equations are a concise way of showing changes in the phase space.
Electron and Proton Transfer
As discussed elsewhere in the chemogenesis web book, here, all reaction mechanisms can be broken down into STAD (substitution-transfer-abstraction-displacement) steps.
Special amongst all the various possible types of STAD step are electron, e⁻, transfer and proton, H+, transfer. This is because in these two cases the transferring moiety is:
- very small
- low mass
As a result, e⁻ and H+ transfers have few kinetic constraints (at least at room temperature and in homogeneous solution), so a reaction's thermodynamic equilibrium position is rapidly reached. As a consequence, a huge amount of chemistry can be predicted using tabulated data and simple relationships.
Electron transfer equates with oxidation and reduction (redox) chemistry. Much redox chemistry can be predicted using the electrochemical series, a scale in which the standard hydrogen electrode is defined as having an E° of 0.00 V. The associated Nernst equation models variations in concentration and temperature.
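A minimal sketch of the Nernst equation in use, taking the textbook Daniell cell (Zn|Zn²⁺||Cu²⁺|Cu, with the tabulated E° of 1.10 V) as the assumed example:

```python
import math

R = 8.314       # gas constant, J mol^-1 K^-1
F = 96485.0     # Faraday constant, C mol^-1

def nernst(E0, n, Q, T=298.15):
    """Cell potential: E = E0 - (R T / n F) ln Q."""
    return E0 - (R * T / (n * F)) * math.log(Q)

# Daniell cell, Q = [Zn2+]/[Cu2+]; diluting Cu2+ tenfold lowers E slightly
E_standard = nernst(1.10, 2, 1.0)    # 1.10 V at unit activities
E_diluted = nernst(1.10, 2, 10.0)    # ~1.07 V
```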
Proton, H+, transfer equates with Brønsted acid/base chemistry, here.
Hydrogen ion concentration [H+] and pH are precisely defined measures
of acidity and alkalinity. The pH of a wide variety of aqueous solutions,
including buffer solutions, can be calculated from a knowledge of an
acid's Ka or pKa. The logic extends
to non-aqueous systems and is routinely employed by synthetic chemists
who have a range of super acids and super bases at their disposal.
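As a sketch of how tabulated pKa data is used, the Henderson-Hasselbalch relationship for a buffer, assuming the commonly tabulated pKa of about 4.76 for acetic acid:

```python
import math

def buffer_ph(pKa, conc_base, conc_acid):
    """Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    return pKa + math.log10(conc_base / conc_acid)

pKa_acetic = 4.76                               # assumed tabulated value
ph_equal = buffer_ph(pKa_acetic, 0.10, 0.10)    # equimolar buffer: pH = pKa
ph_basic = buffer_ph(pKa_acetic, 0.20, 0.10)    # extra conjugate base raises pH
```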
Electron and proton transfer
reactions can be described in terms of the full equation, the net ionic
equation and the half reactions. The half-reactions can then be arranged
in a standard form:
Electron transfer and proton transfer processes are often predictable.
The Periodic Table
In the first half of the twentieth century, much effort was expended trying to make the periodic table of the elements axiomatic; in other words, trying to fully understand the Mendeleev system in terms of a deeper theory, that deeper theory being quantum mechanics. Paul Dirac claimed this situation had been fully and completely achieved in principle:
"The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble." P.A.M. Dirac, Proc. R. Soc. Lond. Ser. A 123 (1929) 714
Dirac was obviously a brilliant man (he devised the relativistic wave equation and predicted the existence of the positron), but he was not a complexity scientist.
We certainly teach our school and university students that the periodic table is explained in terms of electronic theory, and this line is advanced elsewhere in this web book.
The argument proffered is that:
the pattern of spectral lines experimentally obtained from a sample of gas phase (monatomic) atoms of an element can be "explained by" (mapped to) quantum mechanics in the form of the Schrödinger wave equation, and the spectral lines and quantum patterns obtained by experiment and theory can be mapped to the Mendeleev Periodic Table of the Elements.
Much of this logic appears in some detail on the HyperPhysics site, here.
The relationship between electronic
theory and atomic spectra is linear in the sense that there is
a [close to] one-to-one mapping between theory and experiment, in exactly
the same way that there is a one-to-one mapping between behaviour of a
real gas in a piston and the ideal gas equation, PV = nRT. However,
there is one essential difference: atoms, spectral lines and solutions
to the wave equation (wavefunctions) are discrete quantised entities,
whereas the classical behaviour of a gas in a piston is continuous.
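The one-to-one mapping between theory and experiment is clearest for hydrogen, where the Rydberg formula (which follows from the quantised energy levels) reproduces the observed spectral lines. A minimal sketch:

```python
# Rydberg formula for hydrogen: 1/lambda = R_H * (1/n1^2 - 1/n2^2)
R_H = 1.0973731e7   # Rydberg constant, m^-1 (infinite nuclear mass value)

def hydrogen_line(n1, n2):
    """Wavelength (m) of the photon emitted in the n2 -> n1 transition."""
    return 1.0 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

h_alpha = hydrogen_line(2, 3)   # first Balmer line, ~656 nm (red)
lyman_1 = hydrogen_line(1, 2)   # first Lyman line, ~122 nm (ultraviolet)
```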
Thus, the periodic table of monatomic gas phase elements is axiomatic and linear with respect to theory, in that theory can predict behaviour and behaviour can be explained in terms of theory.
Contrary to what we teach, Eric
Scerri disputes that the full and complete axiomatic mapping between
theory and the periodic table has been achieved. With good evidence
he is able to say: "Electronic configurations are not [fully]
reduced to quantum mechanics nor can they be derived from any other
theoretical approach. They are obtained by a mixture of spectroscopic
observations and semi-empirical methods like Bohr's aufbau scheme." Read the paper online: Eric R. Scerri, Has The Periodic Table Been Fully Axiomatized?, Erkenntnis, 47, 229-243, 1997.
One reason for the discrepancy between the full mapping we teach our students and the incomplete mapping that worries Eric Scerri, this author and others, concerns multi-electron atoms, ions and molecules.
The point is that the Schrödinger wave equation can only be solved analytically for one-electron systems: the hydrogen atom, H, plus He+, Li2+, Be3+, etc. For all multielectron systems, approximations in the mathematics have to be made to deal with electron/electron interactions and correlation.
The mathematical techniques employed are usually pragmatic rather than
rigorous, and they are often semiempirical (partially based on experimental
data). The effect is to produce nice fast computer code that efficiently
predicts molecular energies, geometries and spectra, but at the expense
of the theory being fully axiomatic: the logic becomes blurred.
Periodicity and Congeneric Series
Mendeleev's periodic law stated: "The properties of the elements are a periodic function of atomic weight."
This text has since been modified to: "The properties of the elements are a periodic function of atomic number", because atomic number (or proton number) is a more fundamental property than atomic mass.
Periodic trends can be addressed
with a general diagram of the type:
Note that the periodic table can be formulated in various ways, some more conducive to showing periodicity than others, hence the highly generalised version given above.
Periodicity is covered
in some detail here,
and there is a nice page of periodicity links here.
Periodicity is usually discussed with reference to the chemical elements in the form of gas phase, monatomic species, Na, Cl, etc. (this is certainly the case for ionisation energy data), and sometimes as simple ions, Na+, Cl⁻, etc. In a few cases the periodic analysis extends to monoelemental molecules, for example the physical and reaction chemistry of the diatomic halogens: F2, Cl2, Br2, etc.
One of the aims of this Chemogenesis
web book has been to extend the idea of periodicity. This has been
achieved by employing a combinatorial analysis of the main group elements,
normalised as the corresponding main group elemental hydrides, here.
One of the benefits of this new analysis is that periodicity has been naturally extended into molecular and ionic space by identifying "congeneric" dots, series, planars & volumes of isoelectronic & isostructural species. The Chemical Thesaurus Reaction Chemistry Database lists more than 100 congeneric series, planars and volumes.
Periodicity and congeneric structural and reactivity trends are linearities in reaction chemistry space. These linearities are an echo of the patterns that exist in the electronic structure of atoms; they are fossils from an otherwise unseen quantum world.
Material Types: Elements & Binary Compounds
The chemical elements and binary
elemental compounds present as real materials which may be: metallic,
ionic, molecular covalent or network covalent, or intermediate between
these four extreme types. Bonding and material type can be mapped to the
van Arkel-Ketelaar triangle, here,
and Laing tetrahedron, here:
The bulk properties of elemental and binary materials can be "largely explained" in terms of electronegativity, van der Waals radii and ad hoc valency rules based on Lewis theory, and there is "good" – but NOT perfect – correlation between theory and material type.
Materials are commonly and rather successfully modelled in terms of molecular mechanics, a development of valence shell electron pair repulsion (VSEPR) theory. Metals, alloys, ionic materials, ceramics, polymers, bond polarity, molecular dynamics, crystal structure, etc., can all be "rather successfully" modelled using classical structural theory based on empirical evidence: atomic/ionic radii, bond lengths & angles, and electronegativity.
There is no need to resort to quantum mechanics, although this more sophisticated approach is available when the classical approach breaks down... which it invariably does.
Functional Groups, Homologous Series, Chemical Automata & Gliders
Computer based cellular automata use simple rules to turn squares on a screen on and off. Conway's Game of Life, above, has two rules:
If a square is off, it turns on if exactly three of its neighbours are on. And, if a square is on, it stays on if exactly two or three neighbours are on; otherwise it turns off.
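These two rules are easily implemented; the Python sketch below (using numpy, with wrap-around edges) steps a small grid and shows the classic three-cell "blinker" oscillating with period two:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrap-around grid."""
    # Count each cell's eight neighbours by summing shifted copies of the grid
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)
               if (i, j) != (0, 0))
    # Born with exactly 3 neighbours; survives with 2 or 3; otherwise off
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

g0 = np.zeros((5, 5), dtype=int)
g0[2, 1:4] = 1            # a horizontal bar of three "on" cells
g1 = life_step(g0)        # becomes a vertical bar
g2 = life_step(g1)        # back to the horizontal bar: a "blinker"
```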
Game of Life evolutions commonly exhibit four types of ending. An on screen object may:
- Grow for a few generations, then become static and unchanging.
- Become a "blinker".
- Grow for a few generations and then shrink, self-destruct and vanish.
- Eject a "glider", a Game of Life term, which moves away from the place where it formed until it collides with the edge of the playing area.
As a first approximation, atom-atom bonding is governed by two [sets of] rules:
- Ionic vs. covalent bond character is determined by (maps to) electronegativity.
- Valency, in the form of VSEPR theory, determines molecular shape and connectivity.
Atomic interactions can be
regarded as a type of cellular automata, and they too exhibit various
common "end games".
- Consider a system with two hypothetical atoms, X and Y, which have electronegativity values of between 2.0 and 3.5 (this is so that all bonding is unambiguously covalent).
- The atoms can have valencies of 1, 2, 3 or 4, and all the various valency combinations occur.
- There are many exceptions and special cases, as discussed in more detail here, but commonly the interactions give rise either to molecular materials or to network covalent materials:
However, the analysis used on the Laing tetrahedron page, here, had the crucial constraint that compounds should only possess one type of chemical bond, so that methane, CH4, was allowed, but ethane, CH3CH3, and propane, CH3CH2CH3, which have both C-C and C-H bonds, were excluded.
Once the single-bond-type constraint is removed, the number of structural possibilities explodes! This is illustrated with the hydrocarbons, a set of molecular species containing only hydrogen and carbon, where hydrogen has a valency of one and carbon has a valency of four.
The simplest hydrocarbon is methane, CH4. Next comes ethane, CH3CH3, then propane, CH3CH2CH3.
As the number of four valent
carbon atoms increases, the connectivity rules allow for the existence
of "isomers", which are compounds with the same molecular
formula but with different structural formulae (a different connectivity).
For example, C4H10 can exist
as either butane, CH3CH2CH2CH3
or methylpropane (isobutane), CH3CH(CH3)CH3.
The number of possible structural isomers increases rapidly with molecular size.
Note that as molecular size increases, some of the isomers will be structurally impossible for steric reasons.
Homologous series are "[sets] of compounds in which each member differs from the next by a constant amount" (Morrison & Boyd, 5th Ed).
The "constant amount" is usually a CH2 (methylene) function, and the simplest, archetypal, homologous series is observed with the set of linear alkanes:
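The linear alkane series can be generated from the general formula CnH2n+2, each member differing from the next by one CH2 unit. A minimal sketch:

```python
# General formula for the alkanes: C(n)H(2n+2)
def alkane_formula(n):
    """Molecular formula of the alkane with n carbon atoms."""
    return f"C{n}H{2 * n + 2}"

# methane, ethane, propane, butane, pentane
series = [alkane_formula(n) for n in range(1, 6)]
# -> ['C1H4', 'C2H6', 'C3H8', 'C4H10', 'C5H12']
```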
Some fantastic patterns can
emerge in this space. Consider the set of branched alkanes:
The structural formulae associated with this homologous series make a really rather cool pattern, so it is a great pity that only the C1 and C5 substances are known.
Teachers: here is an interesting "lateral thinking" homework assignment. Get your students to deduce a formula to calculate the number of carbon atoms in the above homologous series. A spreadsheet giving the author's method is available on request.
One of the ideas central to
organic chemistry is that of functional groups being "attached
to" or "superimposed upon" a hydrocarbon sigma-skeleton.
Examples include: alcohol, OH, aldehyde, CHO and carboxylic
acid, COOH, functions. When functional groups are appropriately
attached to linear alkanes, new homologous series are produced:
1-propanol, 1-butanol, 1-pentanol, 1-hexanol...
methanal, ethanal, propanal, butanal, pentanal, hexanal...
methanoic acid, ethanoic acid, propanoic acid, butanoic acid, pentanoic
acid, hexanoic acid...
Crucially, homologous series
exhibit patterns of behaviour in terms of: boiling point, solubility,
partition coefficient, reactivity, etc., which correlate with chain length
and molecular weight.
Homologous series are "gliders" (in the Game of Life sense) which project genuine linear structure and interaction behaviour into reaction chemistry space.
The ability to scale up reaction chemistry is extraordinary. If a chemical reaction works in the lab using one gram amounts of material, we can say with great confidence that the reaction will also work in the chemical plant (in the factory) on a multi tonne scale.
1.0 gram of cyclohexanol (GFW 100.2) contains 6 × 10²¹ molecules, and 1.0 tonne of cyclohexanol contains 6 × 10²⁷ molecules. Both are huge numbers, and correspondingly the two quantities exhibit very similar behaviours. Thus, the oxidation of cyclohexanol to cyclohexanone proceeds in a similar manner on both gram and tonne scales, a scale-up factor of a million.
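The arithmetic behind the cyclohexanol example can be sketched directly:

```python
AVOGADRO = 6.022e23   # molecules per mole
GFW = 100.2           # gram formula weight of cyclohexanol, g/mol

molecules_per_gram = AVOGADRO / GFW                  # ~6 x 10^21
molecules_per_tonne = molecules_per_gram * 1.0e6     # ~6 x 10^27
scale_up = molecules_per_tonne / molecules_per_gram  # a factor of a million
```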
The counter example
should make this point clear. The bridge designer cannot simply design
a new bridge ten times bigger than the last one by multiplying all the
dimensions by a factor of 10, let alone a million!
There are several provisos:
- Reagents which may seem moderately priced in the lab may not be economically viable on a large scale.
- Exothermic reactions may need to be cooled, and/or the rate of addition of the reagents may need to be carefully monitored and controlled. With exothermic reactions there is always the risk of reaction "runaway" leading to an explosion.
- There will be health and safety issues.
- Waste must be safely disposed of.
- There are a number of common lab techniques and pieces of equipment that do not scale well:
- Filtration can be a problem at any scale, and on a bigger scale it will be a bigger problem.
- There is no ton scale equivalent of the rotary evaporator, one of the most useful bits of kit in the synthetic lab.
- Chromatography is an excellent analytical technique, but industrial scale preparative chromatography always gives problems.
In his 1986 Pulitzer prize winning book The Making of The Atomic Bomb, Richard Rhodes tells the story of how plutonium, Pu-239, became the preferred nuclear explosive over uranium-235.
Uranium is mined, but plutonium is formed in a nuclear reactor. However, it is easier to chemically extract Pu from the zoo of other elements present, using selective precipitation and extraction techniques, than it is to physically separate U-235 from U-238. In other words, the chemistry scales better than the physics!
The scientists had prepared just a few micrograms of plutonium and measured the neutron cross section, and so determined its suitability as a nuclear explosive. On the basis of these findings President Roosevelt committed the USA to develop an industry that was larger than the US auto industry at the time. The scientists and engineers had little doubt that the various chemical processes could be scaled up by a factor of 10¹² to produce the necessary ton or so of Pu-239 required for testing and production of the first nuclear weapons.
Historical note: the Pu-239 and U-235 production routes were actually carried out in parallel. The Trinity test, the "Fat Man" Nagasaki bomb and nearly all subsequent nuclear tests and deployed weapons have used (and continue to use) Pu as the primary fissile material. The Hiroshima "Little Boy" bomb used U-235 and a 'gun-type' assembly, and is one of the very few U-235 devices ever detonated.
The reason why chemical reactions
scale so well is that they are inherently parallel processes in
which billions upon billions of independent chemical species interact
and react [more or less] simultaneously.
Conversely, most engineering processes are serial, with items produced one at a time on an assembly line. Houses, bridges, cars and computers are made in a serial way.
A notable exception is seen
with the manufacture of the silicon microprocessor chips that power so
much modern technology. Here hundreds of microprocessor, memory &
logic components are made in parallel on large silicon wafers, which may
be up to 300mm in diameter.
This author sees scale-up as the biggest issue facing nanotechnology. Go to the abstract page of the journal NanoLetters and take a look at recent advances. Ask yourself how well the experiments discussed will scale. I suggest some will scale well, but many won't.
In principle and practice, chemical reaction processes can be scaled up from atom + atom to multi tonne.
Quantitative Structure Activity Relationships (QSAR)
"A QSAR [project] attempts to find consistent [linear] relationships between the variations in the values of molecular properties and the biological activity for a series of compounds, so that these rules can be used to evaluate new chemical entities." From an excellent Introduction to QSAR Methodology on the Network Science website.
Finding a linear relationship
between molecular structure and pharmaceutical activity is the Holy Grail
of drug discovery research.
Linearities also exist in pharmacology space in terms of dose-response and structure-activity, but these are outside the scope of this web book.
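The QSAR idea can be illustrated with a toy least-squares fit. The descriptor (logP) and activity values below are entirely hypothetical; the point is the linear relationship and its correlation coefficient:

```python
import numpy as np

# Hypothetical data for a series of six compounds
logP     = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])   # molecular descriptor
activity = np.array([4.1, 4.6, 5.2, 5.5, 6.1, 6.4])   # e.g. pIC50

slope, intercept = np.polyfit(logP, activity, 1)   # least-squares line
r = np.corrcoef(logP, activity)[0, 1]              # correlation coefficient

# The fitted rule can then be used to evaluate a new chemical entity
predicted_activity = slope * 2.75 + intercept
```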
Molecular biology appears to use chemical interaction in a very "digital" way. DNA replication & repair, protein synthesis, hormone/receptor & protein/protein interactions, the polymerase chain reaction, the passing of nerve signals across synaptic junctions, etc., etc., etc., all involve molecular recognition. The processes seem to proceed by a kind of molecular-clockwork designed and maintained by a Blind Watchmaker.
Briefly: DNA codes for mRNA, then at the ribosome mRNA codes for the sequence of amino acids which make up the peptide chain and primary protein structure. There is a formal correspondence between the four letter language of the genetic code and the 20 letter amino acid code leading to amino acid sequence and protein structure.
These processes seem to have more in common with a computer program than with compressing a sample of gas in a piston to check the validity of the ideal gas equation, PV = nRT.
It seems [and here
the author is writing as a spectator, not a participant] that biological
systems use "analogue" mechanisms, such as sensing and responding
to concentration gradients, in ways that are of secondary importance compared
with "digital" DNA replication and protein synthesis.
Biological chemistry, and even
biology, can be very reproducible, linear and predictable: Yeast cells
replicate to (what appear to be) identical yeast cells, and yeast cells
ferment aqueous sugar solutions to ethanol with high reproducibility.
These linearities are associated with the pattern recognition logic of molecular biology, and they are emergent.
Linear Chemistry: Summing Up
Linear regions of behaviour in chemistry space are so incredibly important to science, technology/engineering and education that practitioners tend to concentrate on the linear regions. The effect is that chemistry space appears to be more linear and predictable than it actually is.
Scientists and philosophers of science are interested in linear behaviour because linear data can be converted to models, theories and laws, and this profound knowledge is used to help explain the world we inhabit.
Engineers and technologists are always keen to exploit regions where behaviour is stable and predictable. Bridges are seldom designed to be on the point of collapse, and likewise, chemical engineers develop processes which maximise yield and minimise the risk of explosion.
Thus, practitioners concentrate on linear regions of chemical science because of their theoretical, practical and educational importance.
Chemistry & Complexity: Systems Thinking
Chemistry & Complexity: Non-Linear Systems
© Mark R. Leach 1999-
Queries, Suggestions, Bugs, Errors, Typos...
If you have any:
Suggestions for links
Bug, typo or grammatical error reports about this page,
please contact Mark R. Leach, the author, using firstname.lastname@example.org
This free, open
access web book is an ongoing project and your input is appreciated.