User:Wvbailey/History of recursion

What a recursive definition is

The notion has to do with the natural numbers 1, 2, ... and, for most authors after Peano, the number 0. As such it needs [axiomatic proof, evidence] that there exists only one sequence of natural numbers, and that these entities are "orderly".

There can be more than one type of "recursion"; the one commonly used for primitive recursion is that of Goedel 1931 [cf van Heijenoort 1967:493].

From Kleene 1952:

"Proof by induction, as a method of proving a theorm T(y) for all natural numbers y, correspondins immediately to this mode of generating the numbers (§ 7) [establishing 0 and its successors]. Definition by induction (not to be confused with 'inductive definition',...), is also called recursive definition, is the analogous method a number-theoretic function φ(y) or predicate P(y). First φ(0) or P(0) (the value of the function or predicate for 0 as argument) is given. Then, for any natural number y, φ(y') or P(y') (the next value after that for y) is expressed interms of y and φ(y) or P(y) (the value for y). Analogously, we can conclude that under these circumstances the value φ(y) or P(y) of the function or predicate is defined for every natural number y. For the two parts of the definition enable us, as we generate any natural number y, at the samed time to determine the value φ(y') or P(y')." (Kleene 1952:217)

Dedekind 1888

van Heijenoort 1967:493 introduction to Wilhelm Ackermann's 1928 On Hilbert's construction of the real numbers:

"The notion of primitive recursive function appeared, in Dedekind 1888, as a natural generalization of the recursive defintions of addition and multiplication.

The numbers in parentheses refer to the "articles" in his essay (1888), translated into English (1909) by Beman as Essays on the Theory of Numbers:

"As such main points I mention here the sharp distinction between finite and infinite (64), the notion of the number [Anzahl] of things (161), the proof that the form of argument known as complete induction (or the inference from n to n+1) is really conclusive (59), (60), (80), and that therefore the definition by induction (or recursion) is determinate and consistent (126)". (Dedekind 1909:33)

Peano 1889

The "so-called" Peano axioms are well known. But the original is not so well known. Their "undefined notions" are derived by "intuition" or "experience". Peano called these:

"Explanations:
The sign N means number (positive integer).
The sign 1 means unity.
The sign a + 1 means the successor of a, or a plus 1.
The sign = means is equal to. We consider this sign as new, although it has the form of a sign of logic." (Peano (1889) The principles of arithmetic, presented by a new method, in van Heijenoort 1967:94)

He immediately defines a series of "Axioms", one of which (#1) is the familiar "basis" of recursion, and another of which (#9) is, in his notation, the notion of definition by induction (recursion):

k ε K ∴ 1 ε k ∴ x ε N . x ε k :Ɔx. x + 1 ε k :: Ɔ. N Ɔ k.
"k" is a "class [a K]" [AND] 1 is a class. Given this:
IF x is a number [AND] x is "class" THEN
GIVEN: number "x + 1" is a "class" [AND] it is a subset of the numbers N, which in turn is a subset of [some class] k.

Assimilation into Formalism by Hilbert 1925, 1927

In his 1925:

"Clearly, the elementary means that we have at our disposal for forming functions are substitution (that is, replacement of an argument by a new variable or function) and recursion (according to the schema of the derivation of the function value for n+1 from that for n). ¶ One might think that these two processes, substitution and recursion, would have to be supplemented with other elementary methods of definition . . . ¶ It turns out, however, that any such definition can be represented as a special case of the use of substitutions and recursions. The method of search for the recursions required is in essence equivalent to that reflection by which one recognizes that the procedure use for the given definition is finitary." (Hilbert (1925) On the Infinite in van Heijenoort 1967:385-386)

He goes on, however, to introduce "recursions [that] would not be ordinary, stepwise ones; rather, we would be led to a manifold simultaneous recursion, that is, a recursion on different variables at once, and a resolution of it into ordinary, stepwise recursions would be possible only if we make use of the notion of function variable; the function φa(a, a) is an instance of a function, of the number-theoretic variable a, that cannot be defined by substitutions and ordinary, step-wise recursions alone if we admit only number-theoretic variables9 (9 This assertion was proved by W. Ackermann [1928])." (Hilbert (1925) On the Infinite in van Heijenoort 1967:386).
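For concreteness, here is a sketch of the sort of "recursion on different variables at once" at issue; this is the modern two-argument Ackermann–Péter function (my choice of formulation, not Hilbert's φa(a, a) itself), the standard example of a total computable function that cannot be obtained by substitution and ordinary, stepwise recursion alone.

def ackermann(m, n):
    # Ackermann–Péter function: the recursion nests on both variables at once,
    # which is what pushes it outside the primitive recursive functions.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # 9 61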

In his 1927 address The Foundations of Mathematics Hilbert presents two "Axioms of number":

16. a' ≠ 0;
17. (A(0) & (a)(A(a) --> A(a'))) --> A(b)   (principle of mathematical induction).
Here a' denotes the number following a, and the integers 1, 2, 3, . . . can be written in the form 0', 0'', 0''', . . . .

He then proposes "need[ed] explicit definitions, which introduce the notions of mathematics and have the character of axioms, as well as certain recursion axioms, which result from a general recursion schema". He defines variables, and substitution of logical formulas into "propositional variables". Then in particular,

"The recursion associated with the number-theoretic variable n is defined when we indicate what value it has for n = 0 and how the value for n' is obtained from that for n. The generalization of ordinary recursion is transfinite recursion; it rests upon the general priniciple that the value of the function for a value of the variable is determined by means of the preceding values of the function."(van Heijenoort 1967:468. This is repeated almost verbatim at Hilbert 1925 On the Infinite in van Heijenoort 1967:386).

He continues by defining functions of functions; the contemporary usage is "composition": [This is messy, see footnote p. 385 re ...]

Φ(f) ~ (a)(Z(a) --> Z(f(a))); here Z(a) is a variable [atomic] proposition, a is a natural number, f(a) is a function with natural number a as argument
Ψ(g) ~ (f)(Φ(f) --> Z(g(f)))
"we can now characterize what is to be understood by explicit definitions and by recursion axioms . . . the recursion axioms are formula systems that are modeled upon the recursive procedure. (all quotes from Hilbert (1927) the Foundations of Mathematics in van Heijenoort 1967:467-469)

--- cc'd from CBM's talk page:

The finitary Hilbert and primitive recursion: I stumbled on this quote in van Heijenoort and thought you might like to see it. In his 1925 On the Infinite Hilbert is discussing the notion of recursion:

"Clearly, the elementary means that we have at our disposal for forming functions are substitution (that is, replacement of an argument by a new variable or function) and recursion (according to the schema of the derivation of the function value for n+1 from that for n). ¶ One might think that these two processes, substitution and recursion, would have to be supplemented with other elementary methods of definition . . . ¶ It turns out, however, that any such definition can be represented as a special case of the use of substitutions and recursions. The method of search for the recursions required is in essence equivalent to that reflection by which one recognizes that the procedure used for the given definition is finitary." (boldface added Hilbert (1925) On the Infinite in van Heijenoort 1967:385-386)

But then, in the following paragraphs Hilbert discusses other "recursions [that] would not be ordinary, stepwise ones . . . that is, a recursion on different variables at once" and ends with the hypothesis that "the function φa(a, a) is an instance of a function, of the number-theoretic variable a that cannot be defined by substitutions and ordinary, step-wise recursions alone if we admit only number-theoretic variables.9 (9 This assertion was proved by W. Ackermann [ [1928] ])." (loc. cit. page 386).

My interpretation of this is that you were correct, and Hilbert himself admits to it. Bill Wvbailey (talk) 19:55, 30 August 2009 (UTC)

There's even more later: "Or, to express ourselves with greater precision and more in the spirit of our finitist attitude, if by adducing a higher recursion or a corresponding variable-type [etc]" (p. 391), and "The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking -- a remarkable harmony between being and thought. We gain a conviction that runs counter to the earlier endeavors of Frege and Dedekind . . . certain intuitive conceptions and insights are indispensable; logic alone does not suffice. The right to operate with the infinite can be secured only by means of the finite. ¶ The role that remains to the infinite is, rather, merely that of an idea . . . through which the concrete is completed as to form a totality." (p. 392)

This is a surprise to me; I didn't realize that Hilbert considered himself to be a finitist. van H. says that "After 1927 Hilbert himself laid the problem aside. The scale of variable-types has not been further investigated" (p. 368). Bill Wvbailey (talk) 20:09, 30 August 2009 (UTC)

One executive summary of Hilbert's program is: find finitistic proofs that the infinitary methods of mathematics are consistent. Our article on Hilbert's program is, unfortunately, very brief. In the context of finitism, the benefit of primitive recursive functions is that it is extremely easy to prove that each one is actually a total function. This is in contrast to total recursive functions, some of which may be very hard to prove total.
For example, consider the function that does the following:
On input n, check whether n itself is the Gödel number of a proof of 0=1 from the axioms of ZFC. If it is not, return 0. Otherwise, go into an infinite loop.
Because ZFC is consistent, this definition yields a total computable function. However, because of Gödel's second incompleteness theorem, ZFC cannot prove that this is a definition of a total function. Such issues cannot arise with primitive recursive functions. — Carl (CBM · talk) 23:37, 30 August 2009 (UTC)

---

Ackermann 1928

Skolem 1925

Shows that modifying the "induction schema" (van Heijenoort's schema A of primitive recursion) to include "course-of-values" recursion does not actually enlarge the class of functions obtained. (van Heijenoort's commentary before Ackermann (1928) On Hilbert's construction of the real numbers.)
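As an illustration (my example, not van Heijenoort's): in a course-of-values recursion the step may use values of the function at any earlier arguments, not just the immediately preceding one. The Fibonacci numbers are the standard case, and the result cited above says that such definitions do not take us outside the primitive recursive functions.

def fib(n):
    # Course-of-values recursion: fib(n) looks back at the two preceding
    # values fib(n-1) and fib(n-2), not only at fib(n-1).
    history = [0, 1]                       # fib(0) and fib(1), given outright
    for k in range(2, n + 1):
        history.append(history[k - 1] + history[k - 2])
    return history[n]

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]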

Goedel 1931

"Schema A is Goedel's schema of primitive recursion (1931; [see van Heijenoort 1967:602ff]), and it is the schema generally adopted today. Ackermann uses another schema, that of Hilbert (1925; [see van Heijenoort 1967:389]),
(1') f(x, 0) = g(x)
(2') f(x, n+1) = h(x, n, f(y, n))

where h is a function obtained by composition from the initial functions (0, the successor function, the identity functions), previously obtained recursive functions, and the function f(y) but not of n; the argument places marked by y do not actually occur in the function f; the argument places marked by y have been occupied by functions of the kind just prescribed. The equivalence of schemas A and A' was proved by Péter (1934; see also 1932)." [quote modified, simplified: footnote a in van Heijenoort 1967:493]

Note: the notion of a "successor" number is "intuitively derived", or "derived by experience". There are ways of making it "axiomatic", but then other notions intrude, e.g. the "intuitive" notion of "concatenation", and from "concatenation" the notions of "place/space" and "contiguous/beside/to the right [left] of", etc.

Goedel 1931 (p. 602): "A number-theoretic function25 φ is said to be recursively defined in terms of the number-theoretic functions ψ(x1, x2, . . ., xn-1) and μ(x1, x2, . . ., xn+1) if:

φ(0, x2, . . ., xn) = ψ(x2, . . ., xn)
φ(k + 1, x2, . . ., xn) = μ(k, φ(k, x2, . . ., xn), x2, . . ., xn)" (van Heijenoort 1967:602)

Thus the "output" function φ has a "base" evaluation when the "index" variable k=0 and thereafter μ for every number k+1, we observe that base- function ψ has one less variable-position than φ and 2 less than μ. Thus the simpler (1-variable) form of this is the following:

φ(0, x) = ψ(x)
φ(k + 1, x) = μ( k, φ(k, x), x )
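A minimal sketch of this one-parameter schema as a higher-order function (my straightforward translation, not Gödel's notation): given ψ and μ, the schema determines φ.

def primitive_recursion(psi, mu):
    # Return phi defined by the schema
    #   phi(0, x)     = psi(x)
    #   phi(k + 1, x) = mu(k, phi(k, x), x)
    def phi(k, x):
        value = psi(x)              # base case phi(0, x)
        for i in range(k):          # build up phi(1, x), ..., phi(k, x)
            value = mu(i, value, x)
        return value
    return phi

# Multiplication k * x as an instance, built from previously obtained addition:
# psi(x) = 0, mu(i, prev, x) = prev + x.
times = primitive_recursion(lambda x: 0, lambda i, prev, x: prev + x)
print(times(4, 3))  # 12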

Example of addition: φ(k, a) = a + k (so that φ(b, a) = a + b), given the notion of "successor" +1; a short sketch in code follows the trace below:

  • φ(0) = a
  • φ(k + 1) = φ(k) + 1

A historically-accurate rendition of "successor" would be:

  • φ(0) = a
  • φ(k ') = φ(k) '
  • k = 0: φ(0) = a
  • k = 1: φ(1) = φ(0) + 1 = a + 1
  • k = 2: φ(2) = φ(1) + 1 = (a + 1) + 1 = a + 2
  • k = 3: φ(3) = φ(2) + 1 = (a + 2) + 1 = a + 3
  • ...
  • k = b: φ(b) = φ(b-1) + 1 = (a + (b-1)) + 1 = a + b ; note that "-1" is fictitious and at this point there really isn't any notion of -1 + 1 = 0; b-1 is the "predecessor" to b, i.e. (b-1) + 1 =def b
  • k = b+1: φ(b+1) = φ(b) + 1 = (a + b) + 1 = a + b + 1
  • [etc]
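A sketch of the trace above in code (my transcription): φ(0) = a, φ(k') = φ(k)', so φ(b) = a + b.

def phi(k, a):
    # phi(k) = a + k by the successor recursion: phi(0) = a, phi(k') = phi(k) + 1.
    if k == 0:
        return a                  # base: phi(0) = a
    return phi(k - 1, a) + 1      # step: phi(k') = phi(k)'

a, b = 5, 3
print([phi(k, a) for k in range(b + 2)])  # [5, 6, 7, 8, 9]; phi(b) = a + b = 8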

mu-operators: (These come in two varieties, bounded and unbounded.) Anyone familiar with contemporary computation will probably wonder what stops this incrementing process; there isn't anything; it goes on forever. So we really haven't calculated a + b. Something else is required. The so-called "representing" function χ(k, b) (cf Indicator function) can stop the process, in a manner of speaking. First, we need a way to compare k and b such that if k = b then χ(k, b) = 0, otherwise χ(k, b) = 1. Then we need the Boolean notion of χ*μ: 0*μ = 0, 1*μ = μ. (Both of these notions can be defined by primitive recursion.) Given this we modify our recursion as follows (a sketch in code follows the trace below):

  • φ(0) = a
  • φ(k + 1) = χ(k, b)*( φ(k) + 1 )
  • k = 0: φ(0) = a
  • k = 1: φ(1) = (IF 0 = b THEN 0 ELSE 1)*( φ(0) + 1 ) = 1*(a + 1) = (a + 1), as long as b > 0.
  • k = 2: φ(2) = (IF 1 = b THEN 0 ELSE 1)*( φ(1) + 1 ) = 1*((a + 1) + 1) = (a + 2), as long as b > 1.
  • k = 3: φ(3) = (IF 2 = b THEN 0 ELSE 1)*( φ(2) + 1 ) = 1*((a + 2) + 1) = (a + 3), as long as b > 2.
  • ...
  • k = b: φ(b) = (IF b-1 = b THEN 0 ELSE 1)*( φ(b-1) + 1 ) = 1*((a + (b-1)) + 1) = (a + b).
Note that "-1" is fictitious and at this point there really isn't any notion of -1 + 1 = 0; b-1 is the "predecessor" to b, i.e. (b-1) + 1 =def b.
  • k = b+1: φ(b+1) = (IF b = b THEN 0 ELSE 1)*( φ(b) + 1 ) = 0*((a + b) + 1) = 0; once k has passed b the factor χ(k, b) cuts off the incrementing, and the desired sum a + b is the value φ(b).
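A sketch of this modified recursion in code (my transcription, with the caveat just noted that after k = b the product drops to 0 rather than truly halting):

def chi(k, b):
    # Representing function: 0 when k = b, otherwise 1.
    return 0 if k == b else 1

def phi(k, a, b):
    # phi(0) = a; phi(k + 1) = chi(k, b) * (phi(k) + 1).
    if k == 0:
        return a
    return chi(k - 1, b) * (phi(k - 1, a, b) + 1)

a, b = 5, 3
print([phi(k, a, b) for k in range(b + 2)])  # [5, 6, 7, 8, 0]; phi(b) = a + b = 8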

This is exactly analogous to a counter machine with the following 4-instruction schema: { CLR r (clear register r), INC r (increment register r), CPY (ri, rj) (copy the contents of ri into rj), JE (ri, rj, z) (IF (ri) = (rj) THEN jump to instruction z ELSE continue to the next instruction) }. Our machine has three "holes" (registers): "a" containing the count a, "b" containing the count b, and "k" containing the increasing count. The output will appear in hole φ:

1 CPY (a, φ)   ; copy the contents of input hole a into output hole φ
2 CLR k        ; empty the contents of counter-hole k
3 JE (k, b, 0) ; if the contents of counter-hole k equal the contents of input hole b then goto DONE [0], ELSE continue onward
4 INC k        ; increment the contents of counter-hole k
5 INC φ        ; increment the contents of output-hole φ
6 JE (k, k, 3) ; unconditional jump back to instruction #3

Note the cyclical jump.
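A small simulator for the six-instruction program above (a sketch; the encoding of the instructions as Python is mine), with instruction 0 playing the role of DONE:

def run_adder(a, b):
    # Simulate the counter-machine program: the result phi = a + b.
    a_reg, b_reg, k, phi = a, b, 0, 0
    ip = 1                                    # instruction pointer (1-based)
    while ip != 0:                            # 0 = DONE
        if ip == 1:                           # 1 CPY (a, φ)
            phi = a_reg; ip = 2
        elif ip == 2:                         # 2 CLR k
            k = 0; ip = 3
        elif ip == 3:                         # 3 JE (k, b, 0)
            ip = 0 if k == b_reg else 4
        elif ip == 4:                         # 4 INC k
            k += 1; ip = 5
        elif ip == 5:                         # 5 INC φ
            phi += 1; ip = 6
        else:                                 # 6 JE (k, k, 3): unconditional jump
            ip = 3
    return phi

print(run_adder(5, 3))  # 8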

Another approach is to provide the notion of predecessor, i.e. "- 1". Given this we can "count down" our number b until it hits zero and then use our Boolean notion of χ*μ (0*μ = 0, 1*μ = μ); thereby we "stop" (freeze is a better word) the output φ.
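Predecessor and "limited subtraction" are themselves definable by primitive recursion; a minimal sketch of the standard definitions (not taken from the text above):

def pred(n):
    # Predecessor by primitive recursion: pred(0) = 0, pred(k + 1) = k.
    return 0 if n == 0 else n - 1

def monus(a, b):
    # Limited subtraction: monus(a, 0) = a, monus(a, k + 1) = pred(monus(a, k)).
    result = a
    for _ in range(b):        # apply pred b times, "counting b down"
        result = pred(result)
    return result

print(pred(0), pred(4))           # 0 3
print(monus(7, 3), monus(3, 7))   # 4 0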

And this is the 0-variable form:

φ(0) = c; c is a constant
φ(k + 1) = μ( k, φ(k) )

Example -- here we've predefined "multiplication by a constant", addition, and subtraction (a sketch in code follows the trace below):

  • φ(0) = 0
  • φ(k+1) = φ(k) + c*(1 - φ(k)),
i.e. μ( k, φ(k) ) = φ(k) + c*(1 - φ(k))
  • k = 0: φ(0) = 0
  • k = 1: φ(1) = φ(0) + 0.5*(1 - φ(0)) = φ(0) + 0.5 - 0.5*φ(0) = 0.5
  • k = 2: φ(2) = φ(1)+ 0.5*(1 - φ(1)) = 0.5 + 0.5*(1 - 0.5) = 0.75
  • k = 3: φ(3) = φ(2)+ 0.5*(1 - φ(2)) = 0.75 + 0.5*(1 - 0.75) = 0.875, [etc]
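A sketch of this parameterless iteration with c = 0.5 (each step closes half the remaining gap to 1):

c = 0.5

def phi(k):
    # phi(0) = 0; phi(k + 1) = phi(k) + c*(1 - phi(k)).
    value = 0
    for _ in range(k):
        value = value + c * (1 - value)
    return value

print([phi(k) for k in range(5)])  # [0, 0.5, 0.75, 0.875, 0.9375]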


"A number theoretic function φ is said to be recursive if there is a finite sequence of number-theoretic functions φ1, . . ., φn that ends with φ and has the property that every function φk of the sequence is recursively defined in terms of two of the preceding functions, or results from any of preceding functions by substitution27, or, finally, is a constant or the successor function x + 1. ... A relation R(x1, x2, . . ., xn) between natural numbers is said to be recursive28 if there is a recursive function φ(x1, x1, . . ., xn) such that for all x1, x1, . . ., xn,

R(x1, x1, . . ., xn) ∾ [φ(x1, x1, . . ., xn) = 0].29
25 That is, its [the number theoretic function's] domain of definition is the class of nonnegative integers . . . . and its values are nonnegative integers.
26 In what follows, lower-case italic letters (with or without subscripts) are always variables for nonnegative integers (unless the contrary is expressly noted).
27 More precisely, by substitution of some of the preceding functions at the argument places of one of the preceding functions , for example φk(x1, x1) = φpq(x1, x1), φr(x2), (p, q, r < k). Not all variables on the left side need occur on the right side (the same applies to the recursion schema (2)).
28 We include classes among relations (as one-place relations). Recursive relations R, of course, have the property that for every given n-tuple of numbers it can be decided whether R(x1, x1, . . ., xn) holds or not.
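As a small illustration of footnote 28's sense of "recursive relation" (my example, not Gödel's): the relation x = y is recursive because it has a recursive representing function that is 0 exactly when the relation holds.

def monus(a, b):
    # Limited subtraction a -. b, a primitive recursive helper.
    return a - b if a > b else 0

def rep_eq(x, y):
    # Representing function for the relation x = y: 0 iff x = y.
    return monus(x, y) + monus(y, x)

print(rep_eq(4, 4), rep_eq(4, 7))  # 0 3
# R(x, y) [i.e. x = y] holds exactly when rep_eq(x, y) = 0.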

Church-Turing Thesis

In computability theory the Church–Turing thesis (also known as Church's thesis, Church's conjecture, and Turing's thesis) is a combined hypothesis ("thesis") about the nature of effectively calculable (computable) functions: by recursion (Church's thesis), by a mechanical device equivalent to a Turing machine (Turing's thesis), or by use of Church's λ-calculus. The three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent by Alonzo Church, Stephen Kleene, J. B. Rosser (1934–6)[1] and Alan Turing (1936–7)[2].

However, even though the three processes proved to be equivalent, the fundamental premise behind the theses, the notion of what it means for a function to be "effectively calculable" (computable), is "a somewhat vague intuitive one"[3]. Thus, as they stand, the theses remain hypotheses and cannot be proven.[3]

Informally the Church–Turing thesis states that if an algorithm (a procedure that terminates) exists then there is an equivalent Turing machine, recursively-definable function, or applicable λ-function, for that algorithm. Today the thesis has near-universal acceptance.

In response, the various notions of "effective calculability/computability" have been proposed as a "formal definition" by Church[4], Turing[5] and Rosser[6], as a "working hypothesis" leading by inductive reasoning to a "natural law" by Post[7] (an idea "sharply" criticized by Church[8]), or as an axiom or set of axioms (suggested by Kurt Gödel to Church in 1934–5[9] but discounted in Post 1936[9]).

The evidence adduced in support of the thesis includes:
  • (A) "heuristic [observational, experiential] evidence" derived from various "symbolic logics and symbolic algorithms"[3]
  • (B) "equivalence of 'diverse formulations'"[10]) and
  • (C) "Turing's concept of a computing machine" -- an observational analysis of a human with a pencil and paper following a set of rules (Turing's analysis[11]) and of a "worker" in boxes (Emil Post's analysis 1936[12][13]).


Church's thesis: "Every effectively calculable function (effectively decidable predicate) is general[14] recursive" (Kleene 1952:300)
Turing's thesis: "Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX." (Kleene 1952:376)
  1. ^ Church 1934:90 footnote in Davis 1965
  2. ^ Turing 1936–7 in Davis 1965:149
  3. ^ a b c Kleene 1952:317
  4. ^ Church 1934 in Davis 1965:100ff
  5. ^ Turing 1939 in Davis 1965:160
  6. ^ Rosser 1939 in Davis 1965:225–6
  7. ^ Post 1936 in Davis 1965:291
  8. ^ Sieg 1997:171 and 176–7
  9. ^ a b Sieg 1997:160
  10. ^ cf Kleene 1952:319–323
  11. ^ Turing 1936–7, loc. cit.:135ff
  12. ^ Post 1936 in Davis 1965:289
  13. ^ Kleene 1952:321–2
  14. ^ An archaic usage of Kleene et al. to distinguish Gödel's (1931) "rekursiv" (a few years later named primitive recursion by Rózsa Péter (cf Gandy 1994 in Herken 1994–5:68)) from Herbrand–Gödel's recursion of 1934 i.e. primitive recursion equipped with the additional mu operator; nowadays mu-recursion is called, simply, "recursion".