If you've read the linked documents "The Origin of Inertia" and "Radiation Reaction", you're ready to take on transient mass fluctuations. From "Radiation Reaction" you know that electromagnetic radiative processes are accompanied by reaction forces on the radiating charges that involve violations of causality and instantaneous violations of energy and momentum conservation. To the extent that gravity is like electromagnetism then, similar processes might reasonably be expected. Since gravity, by comparison with electromagnetism, is such a weak force, however, you might be inclined to think that even if transient radiative reaction type effects occurred, they would be hopelessly smaller than our ability to detect them. After all, in electromagnetism radiation reaction forces are incredibly small in all but the most extreme circumstances. So in gravity they must be smaller still. And the likelihood of our being able to put them to practical use might seem utterly remote to you.

If we take gravity to be only those forces that are described by Newtonian gravity, then these notions are perfectly reasonable. Gravity, unless we're talking about planetary-sized objects or larger, is irrelevant, and higher order effects will be absolutely inconsequential. But, as explained in "The Origin of Inertia", gravity in general relativity theory isn't really like Newtonian gravity. While the effects of Newtonian gravity are all replicated, and some small (now carefully observed) effects are predicted, all inertial reaction forces are gravitational forces too. Inertial reaction forces, compared to our usual notion of gravity forces, are enormous. We see, thus, that gravity in the guise of inertial reaction forces produces markedly larger effects even than electromagnetism. So it's arguably realistic to expect that higher order radiative reaction type inertial effects may be detectable, even practical.

Moreover, even plain old inertial reaction forces are radiative. They have the inverse first power dependence on distance and the dependence on acceleration (rather than velocity) that are the hallmarks of radiative interactions. Indeed, being instantaneous (nonlocal) interactions with the most distant matter in the universe, lowest order inertial reaction forces already bear the imprint of radiation reaction, even setting aside the particular signature of electromagnetic radiation reaction forces.

The reason why inertial forces are so large is that the
gravitational potential due to all of the matter in the universe (that is,
roughly *GM*/*R*) is stupendous, about equal to the square of the
speed of light. This potential is a coefficient of inertial forces. And the
potential couples to matter in the form of gravitational potential energy as a
source of the gravitational field -- the field is nonlinear. So it can't be
scaled away by a gauge transformation. [See P.C. Peters' outstanding article on
this subject in the *American Journal of Physics*, **49**, 564-569
(1981).] Even if the electrical potential coupled to electric charge as a source
of electrical fields -- which it doesn't -- it would be inconsequential. Because
there are essentially equal amounts of positive and negative electric charge,
both the scalar and vector potentials of the electromagnetic field computed
almost anywhere are very nearly zero. Even if they weren't, say in the dome of a
charged Van de Graaff generator, since electrodynamics is linear, the large scalar
potential can be eliminated by a gauge transformation of the "first kind" [a
simple global shift of the potential by an additive constant]. So there's no
electromagnetic analog to inertial reaction forces notwithstanding that they
proceed from field equations that are formally equivalent at "linear order".

To talk about matter and its interactions with "fields" we're
going to require some mathematical formalism. It's one of those unavoidable
things about this business. In particular, in addition to the formalism that
we've already encountered in "The Origin of
Inertia" and "Radiation
Reaction", we're going to need an "operation" called the "divergence". If
your background is electrical, you'll probably recall that the divergence is
found in Maxwell's equations for electromagnetic fields. If it's mechanical,
you'll probably remember it as part of the "equation of continuity", the
fundamental equation describing the behavior of fluids. And it's essential when
discussing "conservation laws". The divergence operation consists of taking the
scalar (or "dot", or "inner") product of the "gradient" operator and some vector
function (or "field") specified in some region of space. Choosing some
hypothetical vector **A**, its divergence (in Cartesian coordinates)
looks like:

$$\nabla \cdot \mathbf{A} = \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z}$$

This operation is important because of a theorem proved by
Gauss. That theorem says that if you have some physical quantity that can be
represented by a vector field, then the "flux" of that field through a
two-dimensional surface, summed up over some closed surface, is just equal to
the divergence of the vector field in each little enclosed volume element summed
up over the volume enclosed by the closed surface. (Strictly speaking, the
"flux" of the vector field is the *net* amount of the field passing through
an element of surface. Oh, and only the part of the field perpendicular to the
surface counts.) Since the net flux of a field through a closed surface is
proportional to the amount of "sources" (or "sinks") that create (or terminate)
the field inside the surface, it follows that the divergence of the field,
point-by-point, is proportional to the density of the sources of the field
point-by-point therein. This is illustrated in the Figure below.
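Gauss's theorem is easy to verify numerically. Here's a short sketch using the arbitrary illustrative field **A** = (*xy*, *yz*, *zx*) on the unit cube (an assumption for the example, not a field from the text); its divergence is *x* + *y* + *z*:

```python
import numpy as np

# Verify Gauss's theorem for A = (x*y, y*z, z*x) on the unit cube [0,1]^3.
# Net outward flux through the surface should equal the volume integral
# of div A = x + y + z.

n = 100
u = (np.arange(n) + 0.5) / n          # midpoint samples on [0, 1]
dA = (1.0 / n) ** 2                   # surface area element

# Surface side: only the perpendicular component of A counts, and the
# faces at x=0, y=0, z=0 contribute nothing for this field. On each of
# the other three faces the perpendicular component is just one coordinate
# (face x=1: A_x = y; face y=1: A_y = z; face z=1: A_z = x), so each face
# integral is the same.
S = np.add.outer(u, np.zeros(n))      # coordinate value over a face grid
face_integral = np.sum(S) * dA        # = 1/2 for each nonzero face
flux = 3.0 * face_integral

# Volume side: midpoint-rule integral of div A over the (unit-volume) cube.
X, Y, Z = np.meshgrid(u, u, u, indexing="ij")
div_integral = np.mean(X + Y + Z)

print(flux, div_integral)             # both ≈ 1.5
```

The midpoint rule is exact for this linear integrand, so the two sides agree to machine precision.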

The other bit of formalism we're going to need is: Do things
correctly from the point of view of relativity theory. That means we're going to
have to do things four-dimensionally instead of three-dimensionally. We won't be
looking at any circumstances that involve speeds anywhere near the speed of
light, but the effect we'll be looking at follows from relativity theory
nonetheless. As far as vectors are concerned, we'll have to use "four-vectors"
appropriate to the four-dimensional "spacetime" of relativity theory instead of
"three-vectors" of ordinary three-dimensional space. Likewise, instead of the
three-dimensional divergence, we'll have to use the "four-divergence".
Symbolically, the four-divergence (of some four-vector **A**) looks like:

$$\Box \cdot \mathbf{A} = \frac{1}{c}\frac{\partial A_t}{\partial t} + \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z}$$

All we've done here is put the time dimension on an equal
footing with the three space dimensions. (To do this, we have to convert times
into the units of measure of distance. This is done by multiplying all times by
the speed of light. And to accommodate the *fact* that the speed of light
is measured by all observers to be the same number everywhere and everywhen, the
time dimension has to have the opposite sign of the space dimensions. This makes
spacetime "pseudo-Euclidean". All of the weird consequences of relativity theory
trace their origin to this property of spacetime geometry. [If you've not yet
gotten a handle on relativity theory, among the plethora of books on the
subject, some of which are very good, I recommend Hermann Bondi's
*Relativity and Common Sense*. He does the whole schnitzel with
nothing more than algebra.])

If you're not into relativity theory you may be inclined to
think that this four-dimensional stuff -- four-vectors and four-divergence and
the like -- is just a lot of high tech mumbo-jumbo. But it's actually rather
elegant and fairly simple. For example, consider the flow of some incompressible
fluid. It has some matter density ρ and, because of
its motion, momentum density ρ**v**. Aside from the
fact that the momentum density depends on the matter density, normally we don't
think of these two things as being intimately related. In the four-dimensional
world picture, however, because space and time are intimately interconnected, so
too are matter and momentum density. Indeed, they are the components of a
four-vector: the matter four-vector, denoted **T**:

$$\mathbf{T} = \left(\rho,\ \frac{\rho \mathbf{v}}{c}\right)$$

(To be completely general here we would have to include Lorentz factors in these
densities, but they are negligible at the low velocities that concern us. The
1/*c* coefficient of the momentum density, ρ**v**, is
included to make the dimensions come out the same for all of the components of
the vector. Theoreticians who work with this stuff regularly usually employ
"natural" units where the value of *c* is set equal to one so it disappears
in all these equations. Tempting though this is if you're doing a tedious
calculation, I prefer to carry the *c*s along to keep the magnitudes of
things in proper perspective.) If we now compute the four-divergence of the
matter four-vector, we recover:

$$\Box \cdot \mathbf{T} = \frac{1}{c}\left[\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v})\right]$$

The stuff in the square brackets you'll probably recognize as the "equation of continuity" of hydrodynamics. So we see that requiring the four-divergence of the matter four-vector to vanish is the same as demanding that:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0$$

This is just the usual expression for the "conservation of matter". So taking four-divergences leads to the sort of results we should expect.
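The continuity equation can be checked by brute finite differences for a simple one-dimensional flow (the Gaussian density profile and the numbers below are arbitrary illustrative choices): a density bump carried along rigidly at constant velocity *v* conserves matter by construction.

```python
import numpy as np

# A density profile rho(x, t) = exp(-(x - v*t)^2) advected at constant
# velocity v satisfies d(rho)/dt + d(rho*v)/dx = 0. Check it numerically
# with central differences at an arbitrary spacetime point.

v = 2.0                      # constant flow velocity (illustrative)
h = 1e-5                     # finite-difference step

def rho(x, t):
    return np.exp(-(x - v * t) ** 2)

x0, t0 = 0.7, 0.3            # an arbitrary sample point
drho_dt = (rho(x0, t0 + h) - rho(x0, t0 - h)) / (2 * h)
dflux_dx = v * (rho(x0 + h, t0) - rho(x0 - h, t0)) / (2 * h)

print(drho_dt + dflux_dx)    # ≈ 0
```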

Confident that taking four-divergences of four-vectors is a reasonable thing to do, we're ready to proceed. In addition to the formal machinery that we've just considered, we need to spell out explicitly the assumptions we're making to get the effect we're interested in. The effect is predicated upon two essentially universally accepted assumptions:

1. Inertial reaction forces in objects subjected to accelerations are produced by the interaction of the accelerated objects with a field -- they are not the immediate consequence only of some inherent property of the object.

2. Any acceptable physical theory must be locally Lorentz-invariant; that is, in sufficiently small regions of spacetime special relativity theory (SRT) must obtain.

The first of these assumptions amounts to asserting Mach's principle, unless we're prepared to believe that some sourceless field (like quantum vacuum fluctuations) exists everywhere ready to create inertial reaction forces whenever anything is accelerated. The second assumption is merely the formal statement that relativity theory is right.

Now we ask: In the simplest of all possible circumstances --
the acceleration of a test particle in a universe of otherwise constant matter
density -- what, in the simplest possible approximation, is the field equation
for inertial forces implied by these propositions? SRT allows us to stipulate
the inertial reaction force **F** on our test particle stimulated by the
external accelerating force as:

$$\mathbf{F} = \frac{d\mathbf{P}}{d\tau}$$ **(1.1)**

with **P** = (γ*m*₀*c*, **p**) the
four-momentum and **p** = γ*m*₀**v**. From this point
on we adopt the convention that bold capital letters denote four-vectors and
bold lower-case letters denote three-vectors, **P** and **p** are the
four- and three-momenta of the test particle respectively, τ is the "proper"
time of the test particle (that is, the time measured in the frame of reference
in which the test particle is instantaneously at rest), *v* the
instantaneous velocity of the test particle with respect to us, and *c* the
speed of light. *m*₀ is the "proper"
or "rest" mass of the test particle -- the mass as measured in the frame of
instantaneous rest. γ = (1 − *v*²/*c*²)^(−1/2) is the "Lorentz
factor" that figures into relativistic effects when velocities approaching that
of light are involved. Equation (1.1) is nothing more than the special
relativistic generalization of Newton's second law of motion: four-forces are
just the rate of change of four-momenta in proper time.
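One way to see the four-vector machinery at work is to check numerically that the pseudo-Euclidean "length" of the four-momentum is the same in every frame: (γ*m*₀*c*)² − |γ*m*₀**v**|² = (*m*₀*c*)². The mass and velocities below are arbitrary illustrative values:

```python
import math

# The invariant norm of P = (gamma*m0*c, gamma*m0*v): with the relative
# minus sign of the pseudo-Euclidean metric, P.P = (m0*c)^2 at any speed.

c = 3.0e8                                 # m/s
m0 = 1.0                                  # proper mass, kg (illustrative)

def four_momentum_norm_sq(vx):
    gamma = 1.0 / math.sqrt(1.0 - (vx / c) ** 2)
    p0 = gamma * m0 * c                   # time component
    p1 = gamma * m0 * vx                  # space component
    return p0 ** 2 - p1 ** 2              # note the minus sign

for vx in (0.0, 0.5 * c, 0.9 * c):
    print(four_momentum_norm_sq(vx) / (m0 * c) ** 2)   # 1.0 each time
```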

Although it's not necessary, we can simplify things
considerably if we specialize our considerations to the frame of instantaneous
rest of the test particle. In this frame we can ignore the difference between
"coordinate" time (that is, the time measured in some arbitrarily specified
frame of reference) and proper time because they are the same, and we can ignore
the Lorentz factors since they are equal to one because *v* is equal to zero. (We won't recover
a generally valid field equation this way, but that's not our objective.) In
this frame Equation (1.1) becomes:

$$\mathbf{F} = \left(c\,\frac{dm_0}{dt},\ \mathbf{f}\right)$$ **(1.2)**

**f** is *d***p**/*dt*, that is, the
normal three-force of everyday experience. Since we seek the equation for the
field that produces **F**, we can't use Equation (1.2) as it stands. The
field that produces **F** should be independent of the particular inertial
reaction force our test particle with its specific mass experiences. It
should be expressed as a force per unit mass that becomes a force when
multiplied by the mass of an interacting particle. So, to get the field strength
that corresponds to **F**, we have to "normalize" **F** by dividing by
*m*₀.
Defining ***F*** = **F**/*m*₀, we get,

$$\boldsymbol{F} = \left(\frac{c}{m_0}\frac{dm_0}{dt},\ \frac{\mathbf{f}}{m_0}\right)$$ **(1.3)**

We are using bold italic letters now to distinguish "field" quantities from non-field quantities.

To recover a field equation of standard form we have to do another thing to Equation (1.3). It's customary to express fields in terms of the density of their sources rather than discrete particle sources. We make this accommodation by letting the test particle have some small extension and a proper matter density ρ₀. Substituting ρ₀ for *m*₀ (so that ***f*** is now the force per unit mass), Eq. (1.3) becomes:

$$\boldsymbol{F} = \left(\frac{c}{\rho_0}\frac{\partial \rho_0}{\partial t},\ \boldsymbol{f}\right)$$ **(1.4)**

Now from SRT we know that ρ₀ = *E*₀/*c*², *E*₀ being the proper energy density, so we may write:

$$\boldsymbol{F} = \left(\frac{1}{\rho_0 c}\frac{\partial E_0}{\partial t},\ \boldsymbol{f}\right)$$ **(1.5)**

This is our four-vector equation for the field strength that
produces the inertial reaction force **F** that the test particle experiences
when we apply an external force to accelerate it.

To transform a field strength into a field equation, we must
operate on the field strength to express it in terms of its local source density
(which may be zero, as in empty space). We do this by taking the four-divergence
of ***F*** getting,

$$\Box \cdot \boldsymbol{F} = \frac{1}{c}\frac{\partial}{\partial t}\left(\frac{1}{\rho_0 c}\frac{\partial E_0}{\partial t}\right) + \nabla \cdot \boldsymbol{f} = -4\pi\rho$$ **(1.6)**

We write the source density as ρ, leaving its
physical identity unspecified for the moment. We can further simplify Equation
(1.6) by noting that ***f*** is irrotational in the case of our
translationally accelerated test particle (that is, the curl of
***f*** vanishes), so we can write ***f*** = −∇φ for some scalar
potential φ. Equation (1.6) then becomes:

$$\nabla^2 \varphi - \frac{1}{c}\frac{\partial}{\partial t}\left(\frac{1}{\rho_0 c}\frac{\partial E_0}{\partial t}\right) = 4\pi\rho$$ **(1.7)**

Up to this point we've just been doing some pretty straightforward manipulations. We've only invoked SRT and some unexceptional methods of creating field equations. We've used *E* = *mc*² to move back and forth between equivalent mass and energy representations for matter. But, in light of SRT, that's rather humdrum business. The really heavy stuff comes in now. The fact that ∇²φ and a second derivative with respect to time occur on the left hand side of Equation (1.7) tells us that we should be looking for a "classical wave equation" for the scalar potential φ. (The "classical" wave equation is the type of equation that the electromagnetic potentials and fields satisfy in "classical" electrodynamics. It has the neat property of "local Lorentz-invariance". That is, it describes signal propagation that's completely consistent with SRT.) *E*₀, the argument of the second time derivative, however, isn't what we'd expect to find. So we ask: Is there any reasonable expression for *E*₀ that will allow us to recover a wave equation for φ? And if there is one, what does it mean physically?

Given the coefficient 1/ρ₀*c*² of ∂²*E*₀/∂*t*² that results when the time
derivative in Equation (1.7) is expanded, only one choice for
*E*₀ will give us the
wave equation we seek: *E*₀ = ρ₀φ. This
choice for *E*₀ yields:

$$\nabla^2 \varphi - \frac{1}{c^2}\frac{\partial^2 \varphi}{\partial t^2} = 4\pi\rho + \frac{\varphi}{c^2}\left[\frac{1}{\rho_0 c^2}\frac{\partial^2 E_0}{\partial t^2} - \left(\frac{1}{\rho_0 c^2}\right)^2\left(\frac{\partial E_0}{\partial t}\right)^2\right] - \frac{1}{c^4}\left(\frac{\partial \varphi}{\partial t}\right)^2$$ **(1.8)**
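To make the algebra behind this choice explicit (a sketch, using only the product rule on *E*₀ = ρ₀φ):

```latex
\frac{1}{\rho_0 c^2}\frac{\partial^2 E_0}{\partial t^2}
  = \frac{1}{\rho_0 c^2}\frac{\partial^2 (\rho_0 \varphi)}{\partial t^2}
  = \underbrace{\frac{1}{c^2}\frac{\partial^2 \varphi}{\partial t^2}}_{\text{wave operator piece}}
  + \underbrace{\frac{2}{\rho_0 c^2}\frac{\partial \rho_0}{\partial t}\frac{\partial \varphi}{\partial t}}_{\text{vanishes when } \varphi \approx c^2}
  + \underbrace{\frac{\varphi}{\rho_0 c^2}\frac{\partial^2 \rho_0}{\partial t^2}}_{\text{transient source piece}}
```

Moving the first piece to the left hand side of Equation (1.7) completes the wave operator on φ, while the last piece, rewritten with ∂²ρ₀/∂*t*² = (1/*c*²)∂²*E*₀/∂*t*², is the leading transient source term of Equation (1.8).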

To keep things simple I've used a result -- that φ ≈ *c*², as explained in
"The Origin
of Inertia" and below -- to cancel two terms that would otherwise appear in
this equation. If we ignore the terms of order *c*⁻⁴ and those
involving derivatives of φ, we have in
Equation (1.8) the usual wave equation for φ in terms of a
source charge density ρ. Since φ is the
potential of a field that acts on all matter in direct proportion to its mass
and is insensitive to direct interaction with all other types of charge, it
follows that the source of φ must be mass.
That is, ρ = *G*ρ₀, *G* being the Newtonian constant of gravitation. Thus the field
that produces inertial reaction forces is the gravitational field, as expected
in general relativity theory (GRT). And the physical meaning of our choice,
*E*₀ = ρ₀φ, is clear. The proper energy densities of material objects are their
gravitational potential energies that arise from the presence of the rest of the
matter in the universe. Were the rest of the matter not there, things would have
no energy or mass. So the simple assumption that inertial reaction forces arise
from the interaction of accelerated objects with an inertial field leads us
inexorably back to Mach's principle, the relativity of inertia.

Turning to less cosmic concerns, we consider the case where all terms involving time derivatives vanish in Equation (1.8). In this circumstance Equation (1.8) reduces to "Poisson's" equation:

$$\nabla^2 \varphi = 4\pi G \rho_0$$ **(1.9)**

The well-known solution for φ here is just the
sum of the contributions to the potential due to all of the matter in the
causally connected part of the Universe (that is, within the "particle horizon"
in the parlance of cosmologists). When calculated, this turns out to be roughly
*GM*/*R*, where *M* is the mass of the Universe and *R* is
about *c* times the age of the Universe. Using reasonable values for
*M* and *R*, *GM*/*R* computes to a value of about 10¹⁷ m²/s². Not
only does *GM*/*R* have roughly the numerical value of *c*², it has the same
dimensions too. This seems to suggest a deep connection between φ and *c*. But
where *c* is a universal local invariant -- that is, *c*, measured
locally, has the same numerical value everywhere at all times -- it appears that
φ should be epoch dependent, for *R* at least is a function of time. (Yes,
we've wandered back into cosmic concerns.) This difference between φ and
*c* is an artifact of the approximation we're using. As Carl Brans pointed
out in a really important paper back in 1962 [*Physical Review*,
**125**, 388-396], if GRT is true, indeed, even more fundamentally, if the
principle of relativity is true, then φ must be a locally
measured invariant, just like *c*. By the way, were our conjecture of epoch
dependence for φ correct, D.J.
Raine wouldn't have been able to show that Sciama's earlier argument on the
origin of inertia [see "The Origin of Inertia"] was true for all epochs in
isotropic cosmological models in GRT [*Reports on Progress in Physics*,
**44**, 1151-1195 (1981)].
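The *GM*/*R* ≈ *c*² estimate is easy to check numerically. This is a rough sketch: the mass within the particle horizon and the horizon radius below are crude, assumption-laden round numbers, not precise cosmological data.

```python
import math

# Order-of-magnitude check that GM/R is comparable to c^2.
G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m/s
M = 1.0e53                 # kg, crude estimate of mass within the horizon
R = c * 13.8e9 * 3.156e7   # m, roughly c times the age of the universe

phi = G * M / R
print(phi / c**2)          # ≈ 0.57 with these numbers: order unity
```

That the ratio lands within a factor of a few of unity, given how rough the inputs are, is the point of the coincidence discussed above.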

Setting cosmic stuff aside again, we remark that in the
time-dependent case we must take account of the terms involving time-derivatives
on the right hand side of Equation (1.8). Note that these terms either are, or
in some circumstances can become, negative. It's the fact that these terms can
also be made very large in practicable devices with extant technology that makes
them of interest for rapid spacetime transport. But before exploring those sorts
of issues, we need to close the loop with radiation reaction. We do this by
looking at the leading transient mass fluctuation term in Equation (1.8) that
our derivation has unearthed. We note that this term, (φ/ρ₀*c*⁴)(∂²*E*₀/∂*t*²), since φ ≈ *c*², is a
bunch of stuff times the second time derivative of an energy density. Indeed, if
we consider only non-relativistic *velocities*, we can ignore the Lorentz
factors (and derivatives thereof) that would appear in the counterpart of
Equation (1.8) derived for an arbitrary frame of reference, and this term would
become (1/ρ₀*c*²)(∂²*E*₀/∂*t*²).

Invoking the notation of the discussion of radiation reaction, and ignoring the possibility that the accelerating force might drive changes in the proper internal energy of our test particle, we can write:

$$\frac{\partial^2 E_0}{\partial t^2} = m_0 \frac{d^2}{dt^2}\left(\frac{v^2}{2}\right) = m_0\left(\mathbf{a} \cdot \mathbf{a} + \mathbf{v} \cdot \dot{\mathbf{a}}\right)$$

From this it is evident that the predicted mass fluctuations are indeed higher order radiation reaction type of effects. But they are different from the electromagnetic case, for although **v** · **ȧ** vanishes for constant accelerations and in the instantaneous rest frame in both cases, the mass fluctuation depends, instant-by-instant, on both *a*² and **v** · **ȧ**. Whereas electromagnetic radiation reaction depends only on **v** · **ȧ**, unless we adopt the Lorentz-Dirac equation, so while **v** · **ȧ** is non-zero for time averages of periodic accelerations, instant-by-instant it vanishes in the proper reference frame because **v** is zero there. This difference leads to the transient mass fluctuation depicted in the Figure below, the counterpart of the similar Figure 2 in "Radiation Reaction". Since it displays Lorentz-Dirac behavior, it seems it's consistent with energy and momentum conservation.
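The different behavior of the two pieces of ∂²*E*/∂*t*² can be seen in a quick numerical average over one cycle of sinusoidal motion (unit amplitude and unit angular frequency are arbitrary illustrative choices): instant-by-instant the pieces differ, but their cycle averages are equal and opposite.

```python
import numpy as np

# For x = sin(t): v = cos(t), a = -sin(t), adot = -cos(t).
# Compare <a.a> and <v.adot> over one full period.

t = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
v = np.cos(t)                    # velocity
a = -np.sin(t)                   # acceleration
adot = -np.cos(t)                # da/dt, the radiation-reaction "jerk"

a_sq_avg = np.mean(a * a)        # <a.a>    = +1/2
v_adot_avg = np.mean(v * adot)   # <v.adot> = -1/2

print(a_sq_avg, v_adot_avg)      # ≈ 0.5 and -0.5; their sum ≈ 0
```

Their sum averaging to zero is just the statement that the kinetic energy of a purely periodic motion has no secular trend, while the *a*² piece alone survives in the instantaneous rest frame.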

Although the effect doesn't vanish for constant accelerations, even in the instantaneous rest frame, that doesn't mean energy and momentum are necessarily conserved locally. But that shouldn't be too surprising, for here we're dealing with an explicitly non-local interaction involving the most distant matter in the universe. When energy in the field and distant matter is taken into account, we may reasonably expect that conservation laws, instant-by-instant, aren't violated. In electrodynamics this problem is more acute because Wheeler-Feynman "absorber" theory is only one of several "alternative interpretations".

Since the expected mass fluctuation is transient, large effects can only be produced by very rapidly changing proper matter (or, equivalently, energy) densities. This means that the duration of any substantial effect will be so short that, from the detection standpoint, it can't be measured by the usual weighing techniques. If, however, we drive a periodic mass fluctuation and couple it to a synchronous pulsed thrust, it's possible to produce a measurable stationary effect. Imagine that we have some capacitors to which we apply an AC voltage. The internal energy in the capacitors will fluctuate as they are charged and discharged in each cycle. This, according to Equation (1.8), will cause a mass fluctuation. If we mount our capacitors on a piezoelectric crystal, we can make the capacitors oscillate up and down while the mass is fluctuating. If we time things so that, say, the crystal is pushing up when the capacitors are more massive and pulling down when they are less massive, we see that we will produce a downward force that adds to the weight of the capacitors and crystal.

Let's make this quantitative. The leading transient term in Equation (1.8) gives a transient proper mass density:

$$\delta\rho_0(t) \approx \frac{1}{4\pi G \rho_0 c^2}\frac{\partial^2 E_0}{\partial t^2}$$ **(2.1)**

where ρ₀ = *E*₀/*c*² has been used to express the proper mass density as a proper energy density. The total transient mass fluctuation induced in the volume *V* of the dielectric in the capacitors then is:

$$\delta m_0(t) = \int_V \delta\rho_0\, dV \approx \frac{1}{4\pi G \rho_0 c^2}\int_V \frac{\partial^2 E_0}{\partial t^2}\, dV$$ **(2.2)**

In this case, since ∂*E*₀/∂*t* is the power
density being stored in the capacitor array at any instant, and the integral
over the volume of the capacitors is just the instantaneous power *P* being
delivered to the capacitors, the integral on the RHS of Equation (2.2) is ∂*P*/∂*t*.
Thus,

$$\delta m_0(t) \approx \frac{1}{4\pi G \rho_0 c^2}\frac{\partial P}{\partial t}$$ **(2.3)**

When a sinusoidal voltage of angular frequency ω is applied to the
capacitors, we may write *P* = *P*₀ sin(2ω*t*). [Since *P*
is the product of the voltage and current, each of which oscillates at ω, its
oscillation frequency is 2ω.] Equation (2.3)
becomes:

$$\delta m_0(t) \approx \frac{\omega P_0}{2\pi G \rho_0 c^2}\cos(2\omega t)$$ **(2.4)**

Substitution of realistic, laboratory scale values
[*e.g.*, *P*₀ of order 100 Watts, ω/2π of order 10 kHz, and a dielectric density of a few grams per cubic centimeter]
yields mass transient amplitudes on the order of tens of milligrams. Higher
powers and frequencies yield correspondingly larger mass fluctuations.
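Evaluating the amplitude of Equation (2.4) numerically bears this out. The power, drive frequency, and dielectric density below are illustrative assumptions of the "laboratory scale" kind, not measured values:

```python
import math

# Amplitude of Equation (2.4): delta_m0 = omega * P0 / (2*pi*G*rho0*c^2).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
P0 = 100.0           # W, peak power to the capacitors (assumed)
f = 1.0e4            # Hz, applied voltage frequency (assumed)
omega = 2.0 * math.pi * f
rho0 = 2.0e3         # kg/m^3, typical dielectric density (assumed)

dm = omega * P0 / (2.0 * math.pi * G * rho0 * c**2)
print(dm * 1e6)      # amplitude in milligrams: ≈ 83 mg
```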

If we drive an excursion in a crystal beneath the capacitors with an amplitude δ*l* at 2ω, assume that the excursion of the crystal accelerates the capacitors only, and allow for a phase angle θ between the acceleration and δ*m*₀, we find for the inertial reaction force of the capacitors on the crystal:

$$F(t) = 4\omega^2 \delta l\, \delta m_0(t)\, \sin(2\omega t + \theta) = \frac{2\omega^3 \delta l\, P_0}{\pi G \rho_0 c^2}\,\sin(2\omega t + \theta)\cos(2\omega t)$$ **(2.5)**

where we've included a phase angle θ between the acceleration and the mass fluctuation. When we compute the product of the sine and cosine we get another sinusoidal term that averages to zero in time. But we also get a term that's independent of time, a stationary force:

$$\langle F \rangle = \frac{\omega^3 \delta l\, P_0}{\pi G \rho_0 c^2}\,\sin\theta$$ **(2.6)**

If δ*l* is a few angstroms, then when sin θ is of order unity, forces on the order of several dynes or more can be produced in the laboratory. The scaling to practical-sized forces can, in the first approximation at least, be estimated from the formalism we've developed here. But beware. Murphy's Law awaits the unwary, seasoned professionals though they may be.
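A sketch that checks Equation (2.6) two ways (the drive values are the same illustrative assumptions used for Equation (2.4), with an excursion of a few angstroms): first from the closed-form expression, then by brute-force time averaging of the product in Equation (2.5).

```python
import math
import numpy as np

G, c = 6.674e-11, 2.998e8
P0, f, rho0 = 100.0, 1.0e4, 2.0e3       # W, Hz, kg/m^3 (assumed values)
omega = 2.0 * math.pi * f
dl = 3.0e-10                            # m, a few angstroms of excursion
theta = math.pi / 2                     # phase giving the maximum force

# Closed form, Equation (2.6):
F_avg = omega**3 * dl * P0 * math.sin(theta) / (math.pi * G * rho0 * c**2)
print(F_avg * 1e5)                      # in dynes (1 N = 10^5 dyn): ≈ 20

# Brute-force time average of Equation (2.5)'s product:
t = np.linspace(0.0, 1.0 / f, 200000, endpoint=False)
dm = (omega * P0 / (2.0 * math.pi * G * rho0 * c**2)) * np.cos(2 * omega * t)
acc = 4.0 * omega**2 * dl * np.sin(2 * omega * t + theta)
F_brute = np.mean(dm * acc)
print(F_brute * 1e5)                    # ≈ the same number
```

The sin(2ω*t* + θ)cos(2ω*t*) product averages to sin θ/2, which is where the factor-of-two difference between Equations (2.5) and (2.6) comes from.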

Copyright © 1998, James F. Woodward. This work, whole or in part, may not be reproduced by any means for material or financial gain without the written permission of the author.