via Scania, 551/B; 40024 Castel S. Pietro T. (BO); Italy
The study of non-bound stationary states in quantum mechanics introduces unnormalisable probability densities. We explore the properties of a sequence of repeated determinations of a random variable having an unnormalisable probability distribution. We arrive at the conclusion that such distributions cannot have any meaning.
It is known that, in solving the Schrödinger equation for non-bound states, we are led to attach meaning also to probability distributions which are unnormalisable. The simplest case is the problem of the free particle, which supports stationary states whose probability density is constant everywhere. This expresses an apparently harmless and intuitive condition, in which all points of space are equivalent.
At this level, the probability density of an unnormalisable distribution may be multiplied by an arbitrary constant, and its integration supplies not the probability of an interval, but the ratio between the probabilities of different intervals.
A complete understanding of these distributions requires answering the following question: if we repeat several times an experiment that determines the value X_i of a random variable X having an unnormalisable probability distribution, what will the sequence of the X_i look like? Is it possible to simulate such a sequence?
We observe that an unnormalisable probability distribution cannot be obtained as the limit of normalised ones. For example, a probability density constant for every x ≥ 0 is not the limit, for L → ∞, of a probability density constant on 0 ≤ x < L, because in the limit all the X_i tend to ∞; but ∞ is not a number, and the possible values of X are all the numbers ≥ 0 (∞ excluded).
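This observation can be sketched numerically. The following is a minimal simulation, assuming a uniform density on 0 ≤ x < L as the normalised candidate: as the cutoff L grows, the entire sample merely rescales with L (for instance, the median stays near L/2), so the results do not settle toward any distribution supported on the finite numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each cutoff L, draw from the normalised uniform density on [0, L)
# and record the median in units of L.
ratios = []
for L in (10.0, 1e3, 1e6):
    xs = rng.uniform(0.0, L, size=10_000)
    ratios.append(float(np.median(xs) / L))  # stays near 0.5 for every L:
                                             # the sample rescales, it does not converge
```

Every quantile drifts to infinity proportionally to L, which is the sense in which "all the X_i tend to ∞" in the limit.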
I encountered various paradoxical properties of these distributions before obtaining the following result.
Let n and k be two large numbers, and put N = k·n. If we repeat the experiment N times, we shall obtain a set of N values covering an interval I. We can divide I into k unequal subintervals, in such a way that each contains n results; f = n/N is then the statistical frequency of every subinterval. Dividing f by the width of each subinterval, we obtain k frequency densities. Associating with each x internal to a subinterval the frequency density of that subinterval, and 0 with each x external to I, we obtain a function ρ(x) that describes the shape of the statistical distribution of the results.
By construction, ρ(x) is necessarily normalised.
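The construction above can be sketched as follows, with `equal_frequency_density` a hypothetical helper name: sort the N = k·n results, cut them into k subintervals of n results each, and assign each subinterval the density f divided by its width, with f = n/N = 1/k. The integral of the resulting ρ(x) over I equals k · (1/k) = 1, whatever the sample.

```python
import numpy as np

def equal_frequency_density(samples, k):
    """Build the piecewise-constant frequency density rho(x) of the text:
    k unequal subintervals of I, each containing n = N/k results."""
    xs = np.sort(np.asarray(samples, dtype=float))
    n = xs.size // k                    # results per subinterval (assumes k divides N)
    edges = np.append(xs[::n], xs[-1])  # k + 1 subinterval boundaries covering I
    widths = np.diff(edges)
    densities = (1.0 / k) / widths      # frequency f = 1/k divided by subinterval width
    return edges, densities

rng = np.random.default_rng(0)
edges, rho = equal_frequency_density(rng.exponential(size=1000), k=20)
total = float(np.sum(rho * np.diff(edges)))  # integral of rho over I: always 1
```

Note that normalisation here is automatic, independent of how the n·k results were actually produced.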
An analogous distribution, with the same ρ(x), may be obtained from an experiment with a normalised probability density; ρ(x) would then be an approximation of that probability density, the better the larger n and k are. Only the order of the X_i, and not the final shape of their distribution, could therefore distinguish an unnormalisable probability from a normalised one. But if the X_i are statistically independent, their order is merely arbitrary, and a random permutation of them must again be an acceptable sequence. Such a permutation cannot keep any memory of a particular order, but only of the shape of ρ(x), exactly like a sequence extracted with a normalised probability. Therefore an unnormalisable probability distribution would behave exactly as a normalised one! It is not possible to ascribe an unnormalisable probability density to a random variable.
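The permutation step of the argument can be illustrated numerically. Purely as a hypothetical stand-in, take a sequence whose order carries all its structure (values drifting with the trial index, as they would have to in any attempted realisation of a uniform density on all x ≥ 0): a random permutation erases the order information while leaving the shape of the distribution, i.e. ρ(x), exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical drifting sequence: the value of each result grows with its index i,
# so the order of the X_i carries structure.
i = np.arange(10_000)
xs = i + rng.uniform(0.0, 1.0, size=i.size)

order_corr = np.corrcoef(i, xs)[0, 1]           # near 1: value remembers its index
shuffled = rng.permutation(xs)
shuffled_corr = np.corrcoef(i, shuffled)[0, 1]  # near 0: the permutation forgets the order

# The shape of the statistical distribution is untouched by the permutation:
h_original, edges = np.histogram(xs, bins=50)
h_shuffled, _ = np.histogram(shuffled, bins=edges)
```

After shuffling, only the histogram (the analogue of ρ(x)) survives, which is exactly what a sequence drawn with a normalised probability would exhibit.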