Cause and Effort
Introduction
This paper is the third of a series [in need of much
correction and editing]. In the first
paper of this series, I sketched out the philosophical basis and conceptual
structure of space and time. In the second,
I attempted to give an account of Quantum Mechanics. In this paper [still
under construction], I shall explore: the Principle of Least Action;
Efficient and Final Causes; Singularities in Newtonian Mechanics; Quantum
Indeterminacy; and human Free Will. My purpose is to develop an understanding
of what Free Will might be and to indicate how it might be compatible with
causality. These papers are much more physics-based and mathematical than
any I have previously posted on my Web Site. I beg the indulgence of those
readers for whom these fields are foreign. I shall attempt to make the
issues intelligible. I have appended a Bibliography so that those who are
interested in acquiring a deeper familiarity with the issues here discussed
may have some idea how to set about doing so. I wish to acknowledge helpful
discussions with my mathematician friend Simon James and my philosopher
friend Paul James. Of course, they should not be held in any way accountable
for the speculations and proposals made here!
The Principle of Least Action
Lagrangian Mechanics
In classical Physics, Lagrangian Mechanics is derived from Newton's Second
Law. In brief, this integral formulation of mechanics states that: if it
is known that a particle (or more generally a system:
a set of particles) starts at a place x_{1} at some
time t_{1}, and ends at a place x_{2} at some
time t_{2}, then the trajectory that it follows x(t) between
these times is given by:
$$ \delta \int_{[x_1,\,t_1]}^{[x_2,\,t_2]} L\!\left(x, \frac{dx}{dt}, t\right) dt \;=\; 0 \qquad \text{over variations of the path } x(t) $$
where:

$$ L\!\left(x, \frac{dx}{dt}, t\right) \;=\; T\!\left(x, \frac{dx}{dt}, t\right) \;-\; U\!\left(x, \frac{dx}{dt}, t\right) $$
and, obviously:

$$ \int_{[x_1,\,t_1]}^{[x_2,\,t_2]} \frac{dx}{dt}\, dt \;=\; x_2 - x_1 $$
L is the "Lagrangian", T is the familiar Kinetic Energy and U is the
equally familiar Potential Energy.
In other words, the physical trajectory, x(t), either minimizes
or maximizes the path integral of the Lagrangian. In colloquial terms,
any system is lazy: it evolves in such a way as to involve the "Least
Action", where the Action is defined as the path integral of the Lagrangian.
The following points should be stressed:
- The Lagrangian is the difference between the Kinetic and Potential Energies, not their sum.
- The system is not presumed to conserve energy.
- The principle of Conservation of Energy emerges from Lagrangian (as from Newtonian) Mechanics.
- Given that Energy is in fact conserved, it follows that the path integral of the Kinetic Energy alone is also stationary.
- The path integral of the velocity is, of course, fixed at x_{2} − x_{1}, whatever the path.
- Hence, if the velocity, dx/dt, of one part of the trajectory is reduced, that of another must be increased.
- Therefore, the Kinetic Energy profile T(t) is far from arbitrary.
- Note that the integral is with respect to time. This becomes the "proper time" in the Relativistic Formulation.
- All pairs of (x_{1}, t_{1}) and (x_{2}, t_{2}) are valid. Even if a large potential barrier impedes the trajectory, an initial velocity sufficient to overcome this barrier can always be conceived. The start and end conditions of the trajectory do not determine T(0).
- As the two times get closer, the pair (x_{1}, t_{1}) and (x_{2}, t_{2}) become equivalent to an initial position and velocity.
- Unless U is a very strange function of x, as t_{2} − t_{1} gets small, x(t) becomes a straight line and dx/dt a constant.
- The Principle of Least Action can then be used to evaluate d^{2}x/dt^{2} and so recover Newton's Second Law.
While Lagrange's Principle of Least Action can be shown to be entirely
equivalent to Newton's Mechanics, the Action is a far from intuitive concept.
The questions arise:
- Why should the Action be minimized?
- What if more than one path exists which minimizes the Action?
- How does the system know how to move at the start of the trajectory? It might be that the trajectory that initially minimizes the action is incompatible with a later path segment that is absolutely required if the whole of the path integral of the Action is to be minimized!
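These questions can at least be made concrete numerically. The following sketch (my own illustrative example, with arbitrary units, endpoints and a uniform-gravity potential that the text does not specify) discretizes the Action for a unit-mass particle and confirms that the Newtonian trajectory accumulates less Action than a rival path between the same endpoints:

```python
import numpy as np

# Discretized action S = integral of (T - U) dt for unit mass in uniform
# gravity, U = g*x, between the fixed endpoints x(0) = 0 and x(1) = 0.
g = 9.8
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def action(x):
    v = np.gradient(x, dt)           # dx/dt along the candidate path
    return np.sum((0.5 * v**2 - g * x) * dt)

straight = np.zeros_like(t)          # rival path: stay at x = 0 throughout
newton   = 0.5 * g * t * (1.0 - t)   # the Newtonian trajectory (d2x/dt2 = -g)

assert action(newton) < action(straight)   # the real path accumulates less Action
```

Perturbing the Newtonian path in any small way raises the computed Action, which is the discrete analogue of the stationarity condition.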
The Path Integral Formulation of Wave Mechanics
A better basis for the Principle of Least Action is obtainable within Wave
Mechanics. The Green's Function, Impulse Response or Propagator for the
Wave Equation with a constant potential, U, can easily be written down.
It is composed from a set of outward-bound spherical waves or Hankel
functions. In the non-relativistic case, these are:

$$ h^{+}_{0}(qr - \omega t) \;=\; \frac{e^{\,i(qr - \omega t)}}{qr - \omega t} $$
Because it is the impulse response, it must satisfy the following property:
the sum, taken over a wavefront surface at some time, Γ(t), of all the
impulse responses to the original impulse response is equal to the original
impulse response:

$$ G_0(U; \mathbf{r}', t'; \mathbf{r}, t) \;=\; \oint_{4\pi,\; r'' = vt''} G_0(U; \mathbf{r}', t'; \mathbf{r}'', t'')\, G_0(U; \mathbf{r}'', t''; \mathbf{r}, t)\, d^2 r'' \qquad \text{for any and all } t'' $$

where v is the wave velocity, here assumed a constant. This is nothing
other than Huygens' principle, which may be familiar from Optics. More
generally:

$$ G(\mathbf{r}', t'; \mathbf{r}, t)[V] \;=\; \oint_{\Gamma(t'')} G(\mathbf{r}', t'; \mathbf{r}'', t'')[V]\, G(\mathbf{r}'', t''; \mathbf{r}, t)[V]\, d^2 r'' \qquad \text{for any and all } t'' $$

where Γ(t'') is the closed wavefront surface at t''. Now, if V(r) is
sensibly constant within Γ(t''), then:

$$ G(\mathbf{r}', t'; \mathbf{r}, t)[V] \;=\; \oint_{\Gamma(t'')} G(\mathbf{r}', t'; \mathbf{r}'', t'')[V]\, G_0(U; \mathbf{r}'', t''; \mathbf{r}, t)\, d^2 r'' $$

and the following formula for G(r', t'; r, t)[V] may be obtained:

$$ G(\mathbf{r}', t'; \mathbf{r}, t)[V] \;=\; \oint_{4\pi}\!\oint_{4\pi} G(\mathbf{r}', t'; \mathbf{r}_2, t_2)[V]\, G_0(U(\mathbf{r}_1); \mathbf{r}_2, t_2; \mathbf{r}_1, t_1)\, G_0(U(\mathbf{r}); \mathbf{r}_1, t_1; \mathbf{r}, t)\, d^2 r_{2,1}\, d^2 r_1 $$

taken over the spheres r_{1} = v·t_{1} and r_{2,1} = v·(t_{2} − t_{1}),
where r_{2,1} = r_{2} − r_{1}.
Note that the integral with respect to d^{2}r_{2,1} is
taken over many different spherical surfaces as r_{1} varies. This
process of expanding the Full Propagator in terms of integrals of constant
potential propagators leads fairly directly to Feynman's "Path Integral"
Formulation of Wave Mechanics [R.P. Feynman "QED, the Strange Theory
of Light and Matter" (1985)]. This says that the Full Propagator is
the simple sum of an infinite set of partial propagators, each of which
is calculated as a series of propagations along an arbitrary trajectory.
From the form of the constant potential propagator, for any frequency component,
each step contributes a phase factor of e^{i (q − ω/v) r},
where r is the length of the step and v(ω) is
the local group velocity, dω/dq. This latter
quantity depends, as do both q and ω, on the
local potential. As already remarked, each of the G_{0} factors
has to be constituted as an energy integral of Hankel functions. The magnitude
of each factor in the partial propagator is determined by its denominator,
which can become very large when the phase does not change much over a
step.
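The composition law introduced above with Huygens' principle can be checked numerically. To keep the quadrature tame, this sketch (my own illustration, not part of the original argument) uses the one-dimensional diffusion kernel, whose composition property has exactly the same form: propagating to an intermediate "surface" and on again reproduces direct propagation:

```python
import numpy as np

# Composition check for a propagator, using the diffusion (imaginary-time)
# kernel G(x,t) = exp(-x**2/4t)/sqrt(4*pi*t) as a numerically benign
# stand-in for the oscillatory wave propagator.
def G(x, t):
    return np.exp(-x**2 / (4*t)) / np.sqrt(4*np.pi*t)

x_mid = np.linspace(-40.0, 40.0, 8001)   # the intermediate "wavefront"
dx = x_mid[1] - x_mid[0]
xa, xb, t1, t2 = -1.0, 2.5, 0.7, 1.3

direct   = G(xb - xa, t1 + t2)
composed = np.sum(G(xb - x_mid, t2) * G(x_mid - xa, t1)) * dx
assert abs(direct - composed) < 1e-10    # re-propagation reproduces direct propagation
```

The choice of intermediate time is arbitrary, just as the text's "for any and all t''" asserts.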
This prescription for obtaining the Full Propagator is at first sight
confusing. Moreover, it might seem to be a recipe that could never be made
to work! An infinite sum of path integrals is not an attractive proposition.
However, this is where the Principle of Least Action comes to our aid.
Only a tiny subset of all the partial propagators are at all significant.
Most are incoherent and cancel out with each other: exactly as do the phasors
from the edges of an aperture in a Fresnel Integral. Only those partial
propagators that are calculated along bundles of paths for which the total
phase factor is stationary with respect to variation of those paths make
any sensible contribution to the Full Propagator. These partial propagators
align with each other and add up coherently. All others have random phases
and so tend to "curl up" and cancel out.
Hence:
- The Full Propagator is more or less the sum of a very small number of partial propagators.
- Each partial propagator is more or less equal to a single path integral of phase factors.
- The path taken is such that small variations in that path have no effect on the phase factor.

This can be interpreted as saying that:
- Propagation occurs along every conceivable trajectory: not just straight lines!
- Significant propagation occurs only along those trajectories for which the phase factor is stationary.
- If there exists more than one such trajectory, then interference will occur between these paths.
In the case of light, minimum phase always means nil phase, as the proper
time interval along any real light trajectory is always zero. All deviations
from "the light cone" involve light travelling faster than its observed
speed in free space, and although such travel is not impossible, the partial
propagators that these trajectories give rise to add up incoherently and
can be ignored for all practical purposes.
An additional insight arises from observing that the argument of the
phase factor is

$$ (q - \omega/v)\, dr \;=\; \frac{2\pi\,(\mathbf{p}\cdot\mathbf{v} - E)\, dt}{h} $$

For the overall phase factor to be stationary, so must be the timeline
integral of the term p·v − E, where p is momentum, v velocity and E the
total energy. Now, in particle terms: p = mv, E = T + U, and T = (p·v)/2.
Hence:

$$ \mathbf{p}\cdot\mathbf{v} - E \;=\; 2T - (T + U) \;=\; T - U $$

the Lagrangian, whose timeline integral is the Action! So if one could
only somehow link the propagation of a wave with the trajectory of a
particle, one would have a coherent rationale for Lagrange's Principle
of Least Action!
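The "curling up" of incoherent contributions is easy to exhibit. In this toy sketch (my own illustration, not a propagator calculation), each "path" contributes a unit phasor whose phase is quadratic in a path parameter n and stationary at n = 0; almost the whole sum comes from the near-stationary paths:

```python
import numpy as np

# Unit phasors exp(i*phi(n)) with a phase stationary at n = 0.  The
# quadratic phase mimics the action of a one-parameter family of paths.
a = 0.0005
n = np.arange(-1000, 1001)
phasors = np.exp(1j * a * n**2)

total = phasors.sum()                      # every "path"
core  = phasors[np.abs(n) <= 400].sum()    # only the near-stationary paths

# The distant, rapidly rotating phasors largely cancel among themselves:
assert abs(total - core) < 0.2 * abs(total)
```

This is exactly the mechanism of the Fresnel integral mentioned above: the aperture edges contribute almost nothing.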
Liouville's Theorem: Causality and Determinism
Phase Space
This is a generalization of ordinary space. Just as a single particle's
position can be represented as a three-component vector function of time,
so the configuration of an N-particle system can be represented by a single
3N-component vector function of time. This vector exists in a 3N-dimensional
Vector Space: R_{3N}. Moreover, the N momenta of the many-particle system
can also be represented by a second 3N-component vector. This vector exists
in a 3N-dimensional Vector Space: P_{3N}. Finally, it turns out
that integrals in mechanics often have to be taken over "all space"
(or some specific volume of space) and "all momenta" (or some specific
volume in momentum space). The concept then arises of the 6N-component
State Vector, which exists in the product space Q_{6N} = R_{3N} × P_{3N}.
Accessibility
The concept of adjacency
is more sophisticated in Phase Space than in Geometric Space. If a system
in a state Q has a momentum, P, then (excepting the possibility of some
collision taking place) its sole next state is Q' = Q + dt·M^{-1}·P. This
is forced on the system by the very meaning of momentum: the temporal rate
of change of position. Of course, "next" is ambiguous as to the sign of the
time increment. Hence, although a state Q may have a myriad of nearest
neighbours, generally speaking (in a Newtonian system) only two of these
will be accessible to it. These are its determinate past and determinate
future. Note, however, that in order to deduce this we have had to invoke
the inverse Mass operator. If this turned out to be a more subtle object
than the diagonal matrix implied here, then a formalism is available that
is open to behaviours richer than conventional Newtonian mechanics. As
we have already seen, a discrete spacetime is incompatible with particulate
matter, so it is implausible that M^{-1} is of Newtonian form.
Predictability
Phase Space is a particularly important concept because of Liouville's
Theorem. This states that a set of identical systems launched with similar
initial conditions, such that all lie within a compact and simply connected
region of phase space, evolve in such a way that their state vectors always
lie in a simply connected region of phase space with the same volume.
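A minimal numerical illustration of Liouville's Theorem (my own sketch, for a single particle in one dimension so that phase space is a plane, and with mass and spring constant set to one): the harmonic oscillator rotates phase space rigidly, so the area occupied by a cloud of nearby states is conserved:

```python
import numpy as np

# Harmonic oscillator with m = k = 1: dx/dt = p, dp/dt = -x.  The exact
# flow is a rigid rotation of the (x, p) plane.
def evolve(x, p, t):
    return x*np.cos(t) + p*np.sin(t), p*np.cos(t) - x*np.sin(t)

def area(x, p):
    # shoelace area of the polygon of states: a 2D "phase-space volume"
    return 0.5*abs(np.dot(x, np.roll(p, -1)) - np.dot(p, np.roll(x, -1)))

theta = np.linspace(0.0, 2*np.pi, 100, endpoint=False)
x0 = 1.0 + 0.1*np.cos(theta)        # a small disc of launch conditions
p0 = 0.5 + 0.1*np.sin(theta)
x1, p1 = evolve(x0, p0, 7.3)        # evolve for an arbitrary time

assert abs(area(x1, p1) - area(x0, p0)) < 1e-12   # volume conserved
```

The disc may translate and rotate, but its area never changes; for nonlinear systems the shape can distort grossly while the area is still conserved.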
Mechanistic Determinism
This would seem to imply that the uncertainty in the final state of a
system is always exactly equal to the uncertainty in its initial or launch
conditions, and therefore that the accuracy of a prediction can always
be improved by tightening the tolerance of the initial conditions. This
would amount to Mechanistic Determinism. However, this conclusion is not
quite
valid. Consideration of the case of a single particle is sufficient to
elucidate the point at issue.
Scrambling Space
Initially, the simply connected region of phase space might be a 6D hyper
sphere. This is maximally compact: it has the minimum hyper surface for
the hyper volume that it contains. Now, as time progresses, this hyper
volume might become at first oblate then greatly elongated until it eventually
became a hyper tube. This is entirely compatible with Liouville's Theorem.
Although its hyper volume at all times remains the same as that of the
initial hyper sphere, the length of the shortest path (never leaving the
hyper volume) between some pairs of points within such a hyper tube might
tend to infinity. The uncertainty in the momentum and position of the single
particle would then diverge: in principle to infinity!
The behaviour of a system can be more subtle than this. The hyper tube
might become entangled with itself, such that the total volume that it
threads through doesn't grow with time: it remains more or less as compact
as the original hyper sphere. This would represent a "scrambling of phase
space". Points that were adjacent at one time becoming fairly remote from
each other at a later time.
Causal Indeterminism
Liouville's Theorem sets no limit on the degree or rate at which "Phase
Space Scrambling" can occur. In fact, for linear systems it doesn't occur
at all, and they exhibit the following behaviour:
$$ \lim_{\delta Q(0) \to 0} \delta Q(t) \;=\; 0 $$
for any value of evolution time, t. In words: "it is always possible
to improve the accuracy of a prediction by tightening up on the specification
or tolerance of the initial conditions".
On the contrary, nonlinear systems can grossly violate this kind of
limiting behaviour. They can exhibit causal Indeterminism. While every
step of their trajectories is governed by a definite objective nonprobabilistic
causal law, the final outcome of this lawful behaviour cannot be inferred
from initial conditions: even exactly specified initial conditions! Two
initial conditions that are indefinitely similar can give rise to two indefinitely
dissimilar outcomes. I have mentioned elsewhere the case of the magnetic
pendulum, in which infinitesimal variation of initial conditions can result
in macroscopic variation of outcome! This corresponds to an infinitely
rapid scrambling of phase space, if only on an infinitely small scale length.
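This behaviour is easy to exhibit in a toy deterministic system. In the following sketch (my own example; the logistic map stands in for the magnetic pendulum) two launches differing by one part in 10^12 end up macroscopically apart, while a linear system keeps them indistinguishable:

```python
# Deterministic, causal, yet unpredictable: the chaotic logistic map
# x -> 4x(1-x) versus a linear contraction x -> x/2.
a, b = 0.3, 0.3 + 1e-12        # two all-but-identical launches
la, lb = 0.3, 0.3 + 1e-12      # the same launches in a linear system
sep = 0.0
for _ in range(100):
    a, b = 4*a*(1 - a), 4*b*(1 - b)   # nonlinear: separation roughly doubles per step
    la, lb = 0.5*la, 0.5*lb           # linear: separation shrinks per step
    sep = max(sep, abs(a - b))

assert sep > 0.1             # macroscopically different outcomes
assert abs(la - lb) < 1e-12  # here, tighter tolerance buys better prediction
```

Every step is lawful and non-probabilistic; the scrambling lies entirely in the nonlinearity.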
As pointed out by Prof. Landsberg in his book "Seeking Ultimates",
this scrambling of phase space is associated with the general increase
in "coarse grain" Entropy required by the Second Law of Thermodynamics.
Discrete Space
If space is not continuous, but rather discretized into some kind of grid:
as has been presumed in these discussions up to now, then it would seem
that determinism is recovered. This is because there is then a definite
limit to the variation in initial conditions. In Newtonian Mechanics, it
would seem that each definite grid point in phase space must have
a definite trajectory: an inevitable sequence of uniquely
accessible
points.
However, as I have already remarked, a quantized space-time-momentum-energy
grid is incompatible with a particle theory of matter. The only particles
are particles of spacetime. For matter to have coordinated motion on a
quantized grid, it must either be "guided by Angels" or be constituted of
waves. If one rejects the former hypothesis, one is left with the conclusion
that even if one started off with a "particle of matter" this would disperse
in accordance with the wave propagation laws of the spacetime grid. In
other words, M^{-1} does not have Newtonian form. Moreover, and more
importantly, it is misleading to focus on such an unrealistic case. In
general, a single-particle initial condition would have to be presented
as a set of amplitudes: one for every spacetime grid node over some 3D
hypersurface! The freedom to independently determine these amplitudes
recovers the continuity of initial conditions that was at first lost by
the discretization of spacetime
. In summary:
- If space is continuous, then:
  - Matter can be particulate.
  - In which case, its initial conditions will be local but continuous, and so indefinite.
- If space is discrete, then:
  - Matter can not be particulate.
  - In which case, its initial conditions will be quantized but global, and so indefinite.
Efficient and Final Causes
St Thomas Aquinas enumerated a number of types of cause of an effect:
in particular, the efficient and final causes. The efficient cause
is the influence that pushes or pulls the effect into being or from where
it obtains its being. The final
cause is the purpose, objective, rationale or motivation in view of
which an effect is brought into being. So, the efficient cause of a child
is the sexual congress of its parents and the final cause of that sexual
congress might well have been their desire to become parents. What, then,
caused
what?
Final causes are not difficult to delineate in biology.
Evolution is driven by them. A gene becomes prevalent in a population if
it facilitates the survival of a species. The final cause of that gene's
persistence is the good of the species. It is almost as if it came into
being in order to help. The fact that its origination was entirely fortuitous
and without any motivation does not contradict the idea that, from the
very moment of its existence, the continuance of its being was supported
and caused by this finality.
Elementary physics deals only with efficient causes. These are called
forces, and are conceived to have effect only forwards in time. Hence,
when I first came across St Thomas' concept of "final cause" I reacted
to it with antagonism. Subsequently, I have come to understand my error.
As should already be clear from the beginning of this paper, the more advanced
Lagrangian formulation of Classical Mechanics is couched in "final cause"
language. All trajectories are determined by the finality that their Action
is stationary: minimal or maximal. Moreover, when an intense field decays
into matter (as in Hawking radiation, with its rough semiconductor-physics
analogue: thermal generation of mobile charge carriers in a depletion
region), those virtual fluctuations that gain substance are selected for
by the finality that they become real. This is a rough physics analogy to
the type of final causation I have just sketched out for biological systems.
Similarly, nonlinear systems can exhibit quasi-repetitive behaviours that
other trajectories tend to converge on. These are called "strange
attractors", and they play the role of formal objectives that material
realities feel their way towards.
Causality and Time
These examples demonstrate that it is legitimate to speak in terms of finality
even in the context of physical science, so long as one clearly keeps in
mind what one means. Indeed, the idea of efficient causality is not entirely
devoid of problems. Consider the interaction between two things, preceded
and followed by free
space propagation. It is natural to say that the deviation of the post
interaction trajectories from the pre interaction trajectories is caused
by the interaction. This is compatible with a normal understanding of cause
and of force. However, one could just as easily say that the deviation
of the pre interaction trajectories from the post interaction trajectories
is caused by the interaction. This is just as true. It suggests that the
pre interaction trajectories were necessitated by the interaction rather
than the interaction necessarily followed from these trajectories.
Mathematically, these and all similar statements are equivalent. They
amount to no more than the rearrangement of elementary equations. The
ambiguity I have highlighted is associated with the fact that both Classical
and Quantum Mechanics are time reversible. There is no place for temporal
succession in microscopic mechanics. In this context, causality is local,
instantaneous and reversible. It does not admit of before and after, but
only of difference.
Causality and Contingency
Contingency: the "it could have been otherwise" of everything, has little
to do with time sequencing. St Thomas' proof of God's Being, the idea that
all being and becoming (or movement, as he designates it) originates in
The Unmoving Prime Mover, The Perfect and Self-sufficient Being, is not
dependent either on the Cosmos having a "first time" or on its being a
system that is "running down" from a low entropy state to a condition of
ultimate disorder. Prof. Landsberg is quite mistaken here. St Thomas' proof
is much more an interpretation of, or inference from, the theme of
incompleteness that runs through the book "Seeking Ultimates".
Cause and Will
Our naive concept of cause is time bound, being based on our experience
of memory, personal
decision making and willing:
- I decide to clench my fist,
- I will this to be so and,
- lo and behold, my fist becomes clenched.
Temporal succession
In fact, our experience of temporal succession is not as simple as this.
Only today, 8th July 2003 AD, I experienced a deep and somewhat unsettling
uncertainty as to whether a certain idea: "my partner going for a dental
checkup", was a memory of an event that had taken place in the recent
past (last Saturday) or an expectation that it was about to occur (next
Saturday). Only after careful consideration of other ideas with which
it was related, in particular that this appointment affected "when a friend
could be invited to visit us" did I conclude that it was a memory from
the past: and only then because I found that my mind contained other vivid
ideas relating to occurrences associated with the visit of said friend.
Only if these ideas were all fantasies about what might happen during the
visit could it be that this visit had not yet taken place, and believe
me the ideas were not the kind of thing that any fairly normal person fantasizes
over!
Memory, Information and the Null State
I suggest that the directionality of our memory and hence our experience
of the flow of time is exactly equivalent to the statement that at one
end of our life history our embryonic brain is "pristine": its neural structure
and form largely determined by innate genetic and physiological developmental
forces.
These have nothing to do with experiential sense data or the epistemological
development of concepts and models to represent the outside world of objective
reality. They are characteristic of a system which is intellectually
closed and interacts with its environment only by the physical exchange
of chemicals and heat, rather than at the synthetic level of sensory stimulus,
still less the abstract level of mental conceptualization.
When a person dies (I except those who suffer degenerative brain conditions),
their brain is altogether more interesting and difficult to characterize:
it contains more information. Whereas it is easy to unpick a sequence when
the first term is known (when one has a firm foundation), it is simply
impossible to interpret a complex of information when the reference point,
the "null state", is mysterious. It is like trying to understand a coded message when
one does not have access to the code. This is why we can recall what happened
before a certain moment in time but not (generally, at least) recall
what happens after that same moment.
The brain/mind changes as the person experiences and learns in such
a manner that each modification builds on those temporally adjacent to
it. That statement was carefully phrased so as to be temporally ambiguous.
The ambiguity is resolved when one notes that in one temporal sense
the complexity of the brain's neural form decreases and in the other it
increases. Somehow, I suggest, memories are stored in the brain/mind as
differences/changes in pattern/form/organization. At the extremity of a
person's cognitive life that corresponds to physiological conception, the
first
memory is easily distinguished from the pristine neural state. All others
overlie it, tending to complicate, distort and obscure it. If at this initial
extremity of life one had knowledge of the state of one's brain/mind at
the other extremity of one's life, then memory of all that is going
to happen in the future would be possible. This would be achieved by a
process of unpicking all the changes between the present state of one's
mind and its terminal state. However, no such knowledge is possible.
By contrast, immediate and ready knowledge of the initial state of one's
brain/mind is possible at all points in life: it is the null, uninformed
state corresponding to a blank sheet of paper. Hence, memory can readily
work backwards in time (towards one's conception), but forwards in time
(towards one's death) not at all.
This account explains why when one understands something for the first
time it often feels as if one has remembered something that one always
knew but had somehow forgotten. One has simply attained a mental state
that one was always going to attain: it is simply part of the life sequence
that makes up one's life. What is before and after any present, from an
objective point of view, is symmetric: only from the partial and subjective
point of view is it asymmetric. It is this fact, and the psychological
experience that weakly mirrors it, that gave rise to Plato's theory that
all learning, discovery and mental growth is a kind of remembering or
recovery of what was lost at birth.
Singularities in Newtonian Physics
A perfect pin subject to a gravitational field,
even when balanced exactly on its point, is always falling. The slightest
deviation from the vertical grows exponentially with time. However, if it
is exactly balanced, it will take an infinite time to topple over: equal
times produce equal multiplications of the angle of inclination. Of course,
in the real world, exact balance is impossible. Even were it to be achieved,
thermal fluctuations would disrupt it. Even at the absolute zero of
temperature quantum fluctuations would provide an impetus. A similar
analysis applies to a particle sliding without friction on a surface
y = −½x^{2}. The accelerating force is given by f = −g·dy/dx = g·x, where
g is the local strength of the gravitational field. Hence
d^{2}x/dt^{2} = g·x and, for release from rest at x_{0},
x(t) = x_{0}·cosh(√g·t), which grows as e^{√g·t}.
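A quick check of this linearized picture (a sketch with illustrative values of g and of the "toppled" deviation X, neither taken from the text): the time for the deviation to reach a fixed macroscopic size grows only logarithmically as the initial imbalance shrinks, so only exact balance postpones the fall forever:

```python
import math

# Linearized pin: d2x/dt2 = g*x released from rest at x0 gives
# x(t) = x0*cosh(sqrt(g)*t).  Solve x(t) = X for the toppling time.
g, X = 9.8, 0.1

def time_to_topple(x0):
    return math.acosh(X / x0) / math.sqrt(g)

t_micro = time_to_topple(1e-6)    # already very finely balanced
t_pico  = time_to_topple(1e-12)   # a million times better balanced...

assert t_pico > t_micro
assert t_pico < 2.2 * t_micro     # ...buys barely twice the time upright
```

Exponential growth is what makes the x0 = 0 case so singular: every non-zero imbalance, however tiny, topples on essentially the same timescale.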
Continuous Discontinuity
A more interesting case is provided
by the surface:
$$ y \;=\; \frac{2}{g}\, x^2\, (\ln x)^3 \qquad x \in (0, 1) $$

A particle sliding without friction on this surface from rest at
x = 0 follows the trajectory:

$$ x \;=\; \exp(-1/t^2) $$

This can be shown as follows:

$$ t^2 \;=\; -1/\ln(x) $$
$$ \frac{dx}{dt} \;=\; v \;=\; \frac{2x}{t^3} $$
$$ \tfrac{1}{2} m v^2 \;=\; \frac{2m x^2}{t^6} \;=\; -mg\,y \;=\; -2m x^2 (\ln x)^3 $$
This surface is interesting. It is smooth (everywhere continuous in all
its derivatives) for real x, although not analytic for complex x. While
its gradient along the real axis is zero at x=0, it nevertheless deviates
from zero indefinitely quickly over an infinitesimal interval next to the
origin. This means that the exponential growth constant for an infinitesimally
small deviation from zero is infinitely large. This in turn means that
a particle sliding without friction on such a slope would be seen to move
a macroscopic distance in a finite time, even if x(0) = dx/dt(0) = 0.
The trajectory is more interesting still. As I have already implied,
it has the property that d^{n}x/dt^{n} = 0 at t = 0, for all n. Hence,
it smoothly continues the trajectory x(t<0) = 0.
This means that the particle could remain stationary at x = 0 for
an indeterminate period of time (after all, the force acting upon it in
this position is exactly zero) and then, without warning, impetus or
efficient cause, accelerate towards x = 1.
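The claimed smoothness at t = 0 can be probed numerically (a sketch in double precision; the fact that exp(−1/t²) underflows to exactly zero for small t makes the point vividly):

```python
import math

# x(t) = exp(-1/t**2) for t > 0, continued by x = 0 for t <= 0.
def x(t):
    return math.exp(-1.0 / t**2) if t > 0 else 0.0

# Every finite-difference slope at t = 0 is numerically zero...
assert (x(1e-3) - x(0)) / 1e-3 == 0.0    # exp(-1e6) underflows to 0.0

# ...the velocity obeys v = 2x/t**3 away from the origin...
t0, h = 1.0, 1e-5
fd = (x(t0 + h) - x(t0 - h)) / (2 * h)
assert abs(fd - 2 * x(t0) / t0**3) < 1e-6

# ...and yet the particle is at a macroscopic x within a few time units.
assert x(3.0) > 0.8
```

Every polynomial test of the trajectory at t = 0 reports "at rest, forever", which is exactly the spontaneity described above.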
Here we have the familiar paradox of the toppling pin writ large. Left
to itself, when will the particle begin to slide? Its eventual, and
it would seem inevitable, motion is without efficient cause.
Nothing in its past indicates or predisposes its future motion. The trajectory
we have delineated, though causal, is fundamentally spontaneous.
Of course, this is no practical problem. For the reasons given above
in connection with the balanced pin, the particle can never be at exact
rest at the brow of this mathematically curious hillock. Nevertheless,
it shows that even causal Newtonian mechanics can exhibit spontaneous Indeterminism
in the simplest of circumstances. Any system which has similar characteristics
is free to exhibit spontaneous behaviour. It is not difficult to believe
that complex systems might have such "complementary functions" (a term
I have borrowed from the theory of differential equations: a behaviour
that requires no external stimulus) and hence manifest singularities of
causality: suddenly, for no reason acting in a surprising manner. In fact,
all of us are familiar with such behaviour. Engineers call such manifestations
"gremlins" or "glitches", and dismiss them as "one of those things".
Transubstantiation
Incidentally, this kind of singular causality offers a clue to the problem
as to how one thing can become another. For an Aristotelian, all change
is "transubstantiation" and involves the discontinuous destruction of one
substance, immediately followed by the creation of another. To a Physicist,
accustomed to the ubiquity of differential equations and continuity, such
language is offensive. Even the singularities of catastrophic transformations
and phase transitions are smoothed out by inertial or statistical effects.
The Platonic
analysis of a continuous change in a thing's participation in a variety
of forms: "a state's projection onto a Hilbert basis" is much more congenial.
Nevertheless, the problem of how any radical change can take
place (e.g. from living to dead or vice versa) if all change is gradual
still niggles. The best examples of such discontinuous changes arise in
connection with the sacraments. In all except matrimony, an identifiable
objective change is effected. In the case of the Eucharist,
bread and wine become the Body and Blood of The Lord; in Baptism,
Confirmation
and Ordination an indelible character is imprinted on the soul of the recipient
of the sacrament; in
Penance
and Unction forgiveness and healing are effected. Equally, if consecrated
bread decays sufficiently as to cease to represent food, then its participation
in the Divine Form ceases and it becomes rotting starch and gluten.
Such radical transitions can be reconciled with continuity if the degree
of participation in a form has a temporal dependence similar to exp(−1/t^{2}).
As the priest starts the process of consecration, the participation of
the Eucharistic Elements in the Divine Form starts to increase from zero,
but with no discontinuity. If the consecration is successfully concluded,
this participation converges to precisely unity: if the consecration is
aborted, it falls back to zero. Similarly, the participation of a sick
person in the form of "health" suddenly starts to improve, but with no
discontinuity, when an effective medicine is ingested.
Classical and Quantum Systems
There are two types of objects. The first is identified by the matter
that composes it. The second is delineated either by the neighbourhood
that it occupies, or the form that it represents. In common experience
specification by matter and form are interchangeable: certain matter occupies
a certain neighbourhood with a certain pattern. In fact they are not at
all equivalent. To make this clear, I shall now consider the two possibilities
in some detail.
Materially closed or "What" systems
A system might be a definite set of particles, with a certain set
of spatial coordinates and momenta. Quantum mechanically, these particles
may be indistinguishable (in the way that two reef knots tied on the same
piece of string are indistinguishable: they can only be told apart when
they are spatially remote) but still their total number is known and their
wavefunction has a definite set of (interchangeable) arguments. Such
a system can not have any spatial bounds. It has a nonvanishing amplitude
associated with its constituents being located indefinitely remote from
the supposed location of the system. Any Quantum system that is conceived
of or expressed in terms of a definite set of coordinates (a Wave
Function: the analogue of the State Vector for a Classical system) is necessarily
non-local. In my experience, Quantum systems are always conceived of
in this way: as sets of particles. Sometimes the number of particles is
allowed to vary, but even so the system consists of exactly however many
particles that it does: wherever each of these may be.
Spatially closed or "Where" systems
Alternately, a system might be a definite neighbourhood of space,
in which case it cannot be described in terms of a finite set of arguments.
This can be just as true classically as Quantum Mechanically: though it
is perhaps more unavoidably true once Quantum Mechanics is taken account
of.
A classical example
Consider an iceberg floating in the Arctic. The concept of iceberg relates
to those water molecules that compose the continuous solid mass of ice.
Over time, this is not a fixed set. Many water molecules break away from
the berg and become constituents of the liquid sea, while other molecules
from the sea attach themselves to the crystalline ice. Although these processes
should properly be described in quantum terms, they are not quantum processes.
Over time the iceberg may shrink in volume, grow or remain static. It will
certainly change its shape. At all times, its identity will be clear
as "the mass of solid ice". Obviously, it may accidentally split or merge
with another floe, but such events are exceptional and can always be dealt
with by redefining the object we are considering to be the totality of
ice floating in the Arctic. Nevertheless, it cannot be described in
terms of a definite set of coordinates and momenta, because the molecules
composing it are not a stable set. Even if, by chance, the iceberg always
had exactly the same number of molecules, it would not be legitimate
to specify a many-body wavefunction in terms of that number of position
vectors. Manifestly, everyday objects are always conceived of and dealt
with in this way.
A quantum example
For systems where dynamical interchange of material is insignificant, the
distinction is still valid. When considering the small disk of copper nickel
alloy in my hand, I would quite properly at any time exclude from the concept
of "penny" all electrons and nuclei that were then observed to be remote
from the apparent surface of the coin. Any simple wavefunction that I might
write down in an attempt to describe the penny can at best be a description
of a larger world in which a penny exists. It will necessarily
describe a world in which, as well as the penny, occasionally stray electrons
and other material objects are observed remote from the penny. It is
quite impossible to write down a simple wavefunction for a penny.
A fundamental difficulty
The difficulty encountered here in specifying a spatially definite system
is profound. While it is not characteristic of Quantum Mechanics, it is
nevertheless exacerbated by its intrinsic non-locality. One would like
to be able to say something like "the iceberg is the sum total of all the
matter within a certain periphery". Even ignoring the difficulty of how
the periphery is to be established (a problem which I do not think
important), this stipulation is problematic.
Classically, the material parts of the iceberg continually change: so
the concept solid iceberg is in fact fluid! Any prediction
for an observable of the iceberg has to be evaluated over
a conditional sum. In this sum, specific terms have to be either included
or omitted, depending on whether the molecules to which they relate happen
at the time in question to belong to that solid mass. So instead of a convenient
sum such as:
Z = Σ_{j=1...N} z_{j}
where z_{j} is some property of the jth molecule, one has to evaluate sums
of the form:
Z = Σ_{j} z_{j} δ(r_{j} : Ω(t))
where r_{j} is the position of the jth molecule, Ω(t)
is the region of space occupied by the iceberg at time "t", and δ(r_{j} : Ω(t))
is unity when r_{j} ∈ Ω(t), and zero otherwise.
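The structure of this conditional sum can be sketched in a few lines. The region, positions and properties below are invented for illustration (a box in two dimensions standing in for the iceberg's volume):

```python
# Sketch of the conditional sum Z = sum_j z_j * delta(r_j in Omega(t)):
# the property z_j only counts when molecule j currently lies inside the
# iceberg's region Omega(t).

def in_region(r, omega):
    """Indicator delta(r in Omega): True if position r lies in the box omega."""
    (xlo, xhi), (ylo, yhi) = omega
    x, y = r
    return xlo <= x <= xhi and ylo <= y <= yhi

def conditional_sum(positions, properties, omega):
    """Z = sum over molecules j of z_j, restricted to those inside omega."""
    return sum(z for r, z in zip(positions, properties) if in_region(r, omega))

# Three molecules; only the first two lie inside the 2x2 box Omega(t).
omega_t = ((0.0, 2.0), (0.0, 2.0))
positions = [(0.5, 0.5), (1.5, 1.0), (5.0, 5.0)]
properties = [1.0, 2.0, 4.0]
print(conditional_sum(positions, properties, omega_t))  # → 3.0
```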
Classical and Quantum Mechanics compared
To progress to a Quantum Mechanical prescription, it would first be necessary
to reformulate this unwieldy classical formula in terms of a state vector
in 6N-Dimensional Phase Space. Sadly, this is
not in order to make it less unwieldy, but rather because non-relativistic
Quantum Mechanics is a prescription for associating a continuous complex
amplitude (the Wave Function: whose time evolution is governed by Schrodinger's
Equation) with each point in Configuration (not Phase) Space. This contrasts
with Classical Mechanics, in which the Wave Function is a 6N-Dimensional
Phase Space Dirac Delta-function. The form of this function does not change
with time. Instead, it only undergoes local motion: a change in that point
in its domain which corresponds to its sole nonzero (infinite) value.
This local motion is governed by Newton's Laws of motion, or (equivalently)
the Lagrangian principle of Least Action. It is well known that for macroscopic
objects (with large masses and small de Broglie wavelengths), Schrodinger's
Equation reduces to Newtonian mechanics, as it must.
Unfortunately, it is far from clear how to proceed. In non-relativistic
Quantum Mechanics, once any point in 3N-Dimensional Configuration Space
has been stipulated, a single complex amplitude, Ψ(r_{1},...r_{N};t),
is immediately obtained: for all the matter being considered together
en masse and as a single composite entity. It is neither possible to
deconstruct this amplitude, nor to obtain anything additional to it. Ψ(r_{1},...r_{N};t),
the Wavefunction, is the beginning and the end of Quantum Mechanics. In
particular, it is not possible to distribute Ψ(r_{1},...r_{N};t)
between the matter lying within some boundary and other matter lying beyond
it. Ψ(r_{1},...r_{N};t)
relates to all the matter that is being considered as a single thing.
Note that Ψ(r_{1},...r_{N};t)
is a function of (r_{1},...r_{N}) only, not of
(r_{1},...r_{N};p_{1},...p_{N}):
its domain is 3N-Dimensional Configuration Space. The momenta, p,
are not arguments of Ψ, but can rather
be obtained from it by applying the operator p_{j} = −iħ ∂/∂r_{j}.
This type of relationship is foreign to classical mechanics: as we have
already noted, the classical "Wavefunction" is a delta-function in 6N-Dimensional
Phase Space.
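This relationship can be illustrated numerically for a single particle in one dimension. The sketch below is a toy (ħ set to 1, momentum chosen arbitrarily): applying −iħ d/dx to a plane wave recovers the momentum that classical mechanics would have supplied as a separate argument:

```python
import cmath

hbar = 1.0   # units chosen so that hbar = 1 (illustrative)
p = 2.5      # the momentum carried by the plane wave (arbitrary choice)

def psi(x):
    """A plane wave exp(i p x / hbar): a momentum eigenstate."""
    return cmath.exp(1j * p * x / hbar)

# Momentum is not an argument of psi; it is read off by differentiation:
# -i * hbar * dPsi/dx = p * Psi  (central finite difference below).
x, h = 0.7, 1e-6
dpsi_dx = (psi(x + h) - psi(x - h)) / (2.0 * h)
p_recovered = (-1j * hbar * dpsi_dx / psi(x)).real
print(round(p_recovered, 6))  # → 2.5
```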
Really, it is illegitimate to consider Ψ(r_{1},...r_{N};t)
for any subset of matter, because as soon as matter other than that governed
by any partial formulation of Schrodinger's Equation is conceived of, the
question arises "how do you know that this other matter is not somehow
subtly crucial to your calculations?" I believe this to be the basis of
"The Collapse of the Wavepacket", which is always associated with the interaction
of a definite "experimental system" with another distinct and indefinite
system: typically called "the observer". If I am correct, this most confusing
phenomenon will be revealed as of no deep epistemological significance,
but only an artefact of our current formulation of Quantum Mechanics.
Of course, if it is possible to represent some Ψ(r_{1},...r_{N};t)
in terms of determinant(s) of single-particle wavefunctions (as is done
for the ground state in Density Functional Theory, and more generally in
the various Determinantal Expansion techniques of Quantum Chemistry), then
each of these can be spatially truncated, and the many-particle wavefunction
for a region of space be written down. This would have to include an infinite
number of (vanishingly?) small contributions from particles that are "normally
remote" from the region, however!
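The determinantal structure just mentioned can be sketched in miniature for two particles in one dimension. The orbitals below are invented for illustration, not taken from any real calculation; the point is only the structure of a determinant of single-particle wavefunctions:

```python
import math

# A toy two-particle Slater determinant built from single-particle orbitals.

def orbital_a(x):
    return math.exp(-x**2)        # a localized "ground-state-like" orbital

def orbital_b(x):
    return x * math.exp(-x**2)    # an orthogonal "excited-state-like" orbital

def slater2(x1, x2):
    """Psi(x1, x2) = (1/sqrt(2)) * det([[a(x1), b(x1)], [a(x2), b(x2)]])."""
    return (orbital_a(x1) * orbital_b(x2)
            - orbital_b(x1) * orbital_a(x2)) / math.sqrt(2)

# Antisymmetry under exchange -- the defining property of the determinant:
print(slater2(0.3, 1.1) == -slater2(1.1, 0.3))
# Pauli exclusion: the amplitude vanishes when both coordinates coincide.
print(slater2(0.7, 0.7))  # → 0.0
```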
It is now clear that the "wavefunction for a spatially defined object"
is of the form:
Ψ(r_{1},r_{2},r_{3},...,r_{∞};t) ; r_{j} ∈ Ω
i.e. a single complex amplitude defined on a domain of an infinite number
of three-dimensional spatial coordinates. If all of these coordinates lie
outside Ω, Ψ takes
the value zero.
The number density for such a wavefunction is:
n(r_{1}) = ∫ |Ψ(r_{1},r_{2},r_{3},...,r_{∞};t)|^{2} d^{3}r_{2} ... d^{3}r_{N}
Note that it is neither possible to calculate n(r_{1})
within Ω, nor to normalize Ψ, without
making reference to its behaviour indefinitely remote from Ω.
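For just two particles in one dimension, the dependence of both the density and the normalization on the behaviour of Ψ over all of space can be sketched numerically. The Gaussian amplitude and the grid below are illustrative choices of mine:

```python
import math

# Toy two-particle wavefunction on a 1-D grid.  Note that normalization
# requires Psi over BOTH coordinates, everywhere on the grid.
grid = [-5.0 + 0.05 * k for k in range(201)]
dx = 0.05

def psi(x1, x2):
    return math.exp(-(x1**2 + x2**2) / 2.0)  # unnormalized amplitude

norm = math.sqrt(sum(psi(a, b)**2 for a in grid for b in grid) * dx * dx)

# Marginalizing out particle 2 gives the number density n(r1).
n = [sum((psi(x1, x2) / norm)**2 for x2 in grid) * dx for x1 in grid]

# The density integrates to unity -- but only because the normalization
# above referred to the wavefunction arbitrarily far from any chosen region.
print(abs(sum(n) * dx - 1.0) < 1e-9)  # → True
```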
Quantum Indeterminacy
I do not wish to dwell on this topic. Some have proposed that the probabilistic
character of conventional Quantum Mechanics represents a certain openness
to Free Will and so the Human Spirit and a relaxation of the rigidity of
Newtonian Mechanics. I think that this is misconceived:

- Newtonian Mechanics is not deterministic, even though it is causal;
- our notion of cause is linked in a confused way with that of temporal succession;
- Lagrangian Physics features final causes as much as efficient causes;
- randomness (which appears to be all that is on offer from Quantum Mechanics) is not at all the same thing as Free Will.
I suspect that the idea of virtual or evanescent
events, which are "selected for" on the basis of some extrinsic finality,
will have a part to play in the theory of Free Will, but as yet this is
unclear.
Conscious Free Will
What then are the supposed characteristics of conscious Free Will? Can
they be expressed in a noncontradictory manner? Can they be allowed for
or even incorporated in Physical Theory?
It seems to me that Free Will is characterized by some obscure admixture
of spontaneity and deliberation [D.J. O'Connor "Free Will" (1971)].
Spontaneity; for else the will would not be free: but constrained by coercion,
circumstance or knowledge. Deliberation; for else the will is not rational:
but at the mercy of chaos, prejudice or ignorance. I have sketched out
elsewhere
how I think these seeming incompatibles can be reconciled.
Knowledge and Ignorance
Spontaneity and Deliberation
Mind and Matter
Adopting the hypothesis that the mind
(but not the spirit, person,
ego or consciousness)
is nothing more than the internal states and processes of the brain, one
can immediately identify the mind of any observer within the Minkowskian
Cosmos as an aspect of the organization of potentialities
within a certain four-volume. Note that as soon as this is done the problem
of a diffuse and indefinite consciousness evaporates. Although individual
particles can be thought of as exploring every possibility, and having
diffuse lifelines, this is not true of a composite object defined in
terms of a macroscopic neighbourhood. The mind is an aspect of the
mass of all the potentialities that do objectively exist within such
a neighbourhood. The mind is not at all like a particle. It does not
have a potentiality to be something or somewhere. The mind is what it is:
a composite of potentialities. The mind is not subject to "observation".
It can never be forced to adopt some specific Eigenstate. In as far as
my mind has a wavefunction, it never collapses. Better, my mind
is
an aspect of the manybody wavefunction for the matter in a certain region
of space. The definite state of my mind is an objective state of that uncollapsed
wavefunction. There is no motive for analysing its state as being a superposition
of other "more definite" mental states in which the state of every particle
is well defined.
My definite objective mental state is itself mixed, in the sense that
it is the combination of all the objective potentialities of all the matter
that composes my brain. Though the particles that compose my brain
have small potentialities for being in positions remote from its neighbourhood,
they
are only part of my brain in as far as they are not remote from it.
Equally, a certain electron that generally speaking forms part of the Sea
of Tranquility on Earth's Moon has a tiny potentiality for being part of
my brain. To this limited extent, it is part of my brain: and its minute
contribution to the physical state of my brain is a minute contribution
to the state of my mind.
This account of affairs relies on the idea that the Wavefunction (and
so the potentialities that it represents) is objective and so represents
episteme, not just a projection of subjective knowledge: doxa. It should
be noted that no account has been given of
consciousness here. Moreover, the phenomenon of the "Collapse of the
Wavepacket" now seems even more difficult to account for. My observation
of a particle now seems to be objectively describable entirely within the
Minkowskian framework, and one would expect all subjective knowledge to
be of uncollapsed wavefunctions: contrary to all experience!
Free Will Revisited
We have seen that Free Will is not incompatible with causality, but rather
that even Newtonian mechanics is open to spontaneity. Moreover, we have
seen that Mind may have a holographic relationship with Matter: being the
sum total of all the material potentialities in a neighbourhood. We have
also seen that final causality has a proper place in Physics. It remains
only to put these three ideas together.
When the mind conceives of some complex purpose, how does it will
this objective to occur? In some cases (for example "let's make a pot of
tea") it is barely conceivable that the subconscious analyzes each practical
step that must be carried out to achieve this objective and then organises
muscular activity to effect it. In other cases (for example "let's solve
this puzzle") it is more plausible that any number of spontaneous trials
are run, with a check being made as to the success of each trial. Those
trials which fail leave no trace in the mind. They are evanescent, rather
like virtual particles in the vacuum. They may even be quantum trials in
a quantumcomputational neural net! When a successful attempt is identified,
it is accepted as valid and is memorized. The "cause" of the persistence
of this trial is that it succeeded: its finality.
Bibliography
Undergraduate Physics
- R.P. Feynman, R. Leighton and M. Sands "Lectures on Physics" (1964)
- K.R. Symon "Mechanics" (1971)
- R.P. Feynman "QED, the Strange Theory of Light and Matter" (1985)
Graduate Physics
- A. Einstein "The Meaning of Relativity" 6th Edition (1956)
- L. Schiff "Quantum Mechanics" 3rd Edition (1968)
- P. Strange "Relativistic Quantum Mechanics" (1999)
Academic Philosophy
- Plato "Timaeus"
- K.R. Popper "Conjectures and Refutations" (1972)
- K.R. Popper "Quantum Theory and the Schism in Physics" (1982)
- D.J. O'Connor "Free Will" (1971)
Popular books on Physics, its Philosophy and related Theology
- J. Powers "Philosophy and the New Physics" (1982)
- J.C. Polkinghorne "The Quantum World" (1984)
- S. Hawking "A Brief History of Time" (1988)
- J.D. Barrow "Theories of Everything" (1990)
- P. Davies and J. Gribbin "The Matter Myth" (1991)
- P. Davies "The Mind of God" (1992)
- P. Davies "God and the New Physics" (1993)
- P. Landsberg "Seeking Ultimates: an intuitive guide to Physics" (2000)