Unsolved Problems
1) The Conjecture as a gap in the Unifying View and the foundation of Automatic Differentiation
2) How to establish that (scalar) elementariness of a function is violated at a certain point
3) A question about the functions having no points of violation of scalar elementariness
4) Regularization of a polynomial ODE and the Conjecture
5) More suspects for points of violation of elementariness
6) "AD" for singularities: scenarios of evaluation of formal derivatives at a singular point of a polynomial m-th order ODE
Here and further on we deal with holomorphic functions and their derivatives in
the complex plane.
The Conjecture (a brief presentation here, slides 68; in detail here) is not the only unsolved problem emerging in the frame of the Unifying View, but it is the central open issue affecting the shape of the Unifying View theory. Below are a few problems, headed by the Conjecture proper and including other unsolved problems associated with it, all awaiting the attention of the mathematical community.
1) The Conjecture as a gap in the Unifying View and the foundation of Automatic Differentiation
In either of the two equivalent forms below, the Conjecture claims that ...

For any component, say x(t), of an explicit system in x(t), y(t), z(t), ... of m 1st order rational ODEs, or

For any component, say x(t), of an explicit system in x(t), y(t), z(t), ... of m 1st order polynomial ODEs^{(1)},

regular at all points of its phase space, for any initial point a there exists an explicit n-th order rational ODE satisfied by x(t) and regular at the point t=a:

x^{(n)} = R(t, x, x', x", ..., x^{(n-1)})     (3)

So far neither a proof nor a counterexample for this Conjecture is available. However, if the Conjecture is weakened by dropping the requirement of regularity of the target ODE (3), there is a proof based on combinatorial considerations (though impractical for obtaining the target ODE): see slide 13 or Appendix 1 here.
Latest development (2023). At the end of 2022, believing that I had resolved the Conjecture, I drafted a paper and made an attempt at its submission. The submission was declined without consideration for technical reasons; however, soon I discovered a mistake in the crucial Lemma (prompted by Christer Kiselman). This unpublished manuscript nevertheless became available as a preliminary print. Therefore, as of this moment (April 2023), here is the stumbling block on the way to a solution of the Conjecture.
___________________
^{(1)} 1st order rational ODEs regular at a given point are transformable into a wider polynomial system of ODEs.
Evolution of the concept of elementary functions
First, elementary functions were defined by Liouville in the 19th century as a matter of convention: a few well-studied and widely used functions and their finite compositions.
In the 1960s Ramon Moore suggested a different approach, replacing the convention about the elementary functions with a definition based on their fundamental property.
Moore defined (vector) elementariness based on the property of an elementary vector-function to satisfy a system of explicit rational 1st order ODEs.
A competing definition of standalone or scalar elementariness would be based on the property of an elementary function to satisfy one explicit rational ODE of order n, or one implicit polynomial ODE of order n.
The vector elementariness by Moore easily follows from the scalar elementariness (because conversion of an n-th order ODE into a system of 1st order ODEs is trivial). As to the converse, the scalar elementariness by Moore follows from the vector elementariness due to the Theorem proved by me in Appendix 1 here. Therefore the scalar and vector elementariness by Moore are equivalent.
The Liouville-elementary functions are also Moore-elementary. Say, cos t is vector-elementary together with sin t because they satisfy the system

x' = y
y' = -x .

However each of them is also scalar-elementary, satisfying the ODE x" = -x (or satisfying the implicit first order nonlinear ODE (x')^{2} + x^{2} = 1). In the general case, however, it may be a challenge to obtain an n-th order ODE demonstrating the scalar elementariness from the system of ODEs demonstrating the vector elementariness.
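To make the sin/cos illustration concrete, here is a minimal sketch (my own illustration, not code from the article) of how the 1st order system mechanically generates all Taylor coefficients, i.e. the AD-style recurrence, in exact rational arithmetic:

```python
from fractions import Fraction

# Taylor coefficients a_n = x^(n)(0)/n! generated from the system
#   x' = y,  y' = -x,   with x(0) = 0, y(0) = 1  (i.e. x = sin t, y = cos t).
# The AD recurrence: a_{n+1} = b_n/(n+1),  b_{n+1} = -a_n/(n+1).
N = 10
a = [Fraction(0)] * (N + 1)   # coefficients of x = sin t
b = [Fraction(0)] * (N + 1)   # coefficients of y = cos t
a[0], b[0] = Fraction(0), Fraction(1)
for n in range(N):
    a[n + 1] = b[n] / (n + 1)    # from x' = y
    b[n + 1] = -a[n] / (n + 1)   # from y' = -x

# sin t = t - t^3/3! + t^5/5! - ...
print([str(c) for c in a[:6]])   # ['0', '1', '0', '-1/6', '0', '1/120']
```

This is exactly the trivial direction mentioned above: the 1st order system delivers every derivative of both components at the initial point.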
It was understood indeed that there exist infinitely many ODEs demonstrating elementariness of a particular (vector-)function, some of them possibly singular at certain points of their phase space.
It's worth noting that...
The Moore definition applies to the functions...
· In their entire domain of existence, and ...
· With the understanding that the rational ODEs may have points of singularity in their phase space, unusable as initial points, though sometimes it is possible to replace singular ODEs with regular ones at the point of interest.
Then in 2008 it was discovered that some holomorphic scalar-elementary functions have isolated points at which their Moore-elementariness may be demonstrated only with singular ODEs. For example, such is the function

x(t) = (e^{t} - 1)/t,  x(0) = 1

considered in detail below. Such special points were earlier known as "regular singularities" or "removable singularities". As was shown, though the singularity of the function at such a point is removable and the function is holomorphic there (with the properly defined value at this point), it is impossible to get rid of the singularity in the ODEs demonstrating its Moore-like scalar elementariness at such points. This new type of special point happens to be unremovable (discussed in item 2).
It was namely the discovery of these special points which gave the reason to refine the Moore definition of elementariness as a property of a function in a neighborhood of a point (rather than a property in the entire domain of existence).
The refined definitions of elementariness
Definition 1: a vector-function (x, y, z, ...) is called vector-elementary at and near a point t=a if it satisfies a system of rational ODEs (1) regular at the point t=a, or a polynomial system (2).
Definition 2: a function x(t) is called scalar-elementary at and near a point t=a if it satisfies a rational ODE (3) regular at the point t=a, or if it satisfies an implicit polynomial ODE (4) regular at a, meaning ∂P/∂X_{m}|_{t=a} ≠ 0 (where X_{m} denotes x^{(m)}).
The vector elementariness easily follows from the scalar elementariness (because conversion of an n-th order ODE into a system of 1st order ODEs is trivial). As to the converse, the scalar elementariness would follow from the vector elementariness if the Conjecture is true. Therefore the equivalence between these two definitions depends on the Conjecture.
Most properties and results in the "Unifying View" are proved only for vector elementariness, and it is not known whether they are true for scalar elementariness. In particular, the fundamental property of closedness of the class of vector-elementary functions states that a composition of such vector-functions, or the inverse vector-function, preserves vector elementariness. (For scalar elementariness this is not known.)
2) How to establish that (scalar) elementariness of a function is violated at a certain point?
Until recently, the only function proved to be non-elementary (at all points of its domain!) was Euler's Gamma function, thanks to the Hölder theorem utilizing the special property of the Gamma function to satisfy the finite difference equation Γ(x+1) = xΓ(x).
Our method of proving that the scalar elementariness of a function is violated at a certain point utilizes the specifics of the sequence of derivatives at this point, for example the specifics of x^{(n)}_{t=0} = 1/(n+1) (see the article).
Any holomorphic function x(t) is defined entirely by its analytic element, i.e. by the sequence of derivatives x^{(n)}_{t=a} at a point. It appears that the scalar elementariness at a point a also depends on the properties of the sequence of its derivatives x^{(n)}_{t=a}. For the type of functions and examples considered in the article, the derivatives are rational numbers x^{(n)}_{t=a} = r_{n}.
The method of proof is based on considering polynomial ODEs P(t, X, X_{1}, ..., X_{m}) = 0 over integer coefficients at a regular point, where ∂P/∂X_{m} = c ≠ 0. By repeated differentiation, we may obtain an infinite sequence of polynomial ODEs P_{k}(t, X, X_{1}, ..., X_{m}, ..., X_{m+k}) = 0 and then establish that all of them have the same coefficient at the leading derivative: ∂P_{k}/∂X_{m+k} = c ≠ 0.
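The differentiation step can be sketched in code (a minimal illustration under my own encoding of polynomials, not the author's implementation): a polynomial ODE is stored as a dict mapping exponent tuples over (t, X_0, ..., X_m) to integer coefficients, and the total t-derivative applies the chain rule dP/dt = ∂P/∂t + Σ_i (∂P/∂X_i)·X_{i+1}, introducing X_{m+1}:

```python
from collections import defaultdict

def total_derivative(P):
    """Total t-derivative of a polynomial ODE P(t, X_0, ..., X_m) = 0,
    where X_i stands for x^(i)(t).  A monomial c * t^e * X_0^e0 * ... is
    stored as {(e, e0, ..., em): c}.  The result involves X_{m+1}."""
    Q = defaultdict(int)
    for mono, c in P.items():
        mono = mono + (0,)                  # make room for X_{m+1}
        if mono[0] > 0:                     # the dP/dt term
            d = list(mono); d[0] -= 1
            Q[tuple(d)] += c * mono[0]
        for i in range(1, len(mono) - 1):   # the (dP/dX_i) * X_{i+1} terms
            if mono[i] > 0:
                d = list(mono); d[i] -= 1; d[i + 1] += 1
                Q[tuple(d)] += c * mono[i]
    return {k: v for k, v in Q.items() if v}

# The singular ODE t*X_1 - t*X_0 + X_0 - 1 = 0 satisfied by (e^t - 1)/t
# (ODE (7) below); variables are (t, X_0, X_1).
P = {(1, 0, 1): 1, (1, 1, 0): -1, (0, 1, 0): 1, (0, 0, 0): -1}
P1 = total_derivative(P)
# P1 = t*X_2 - t*X_1 + 2*X_1 - X_0
print(P1)
```

Note that the coefficient at the new leading derivative X_2 is the same factor t that stood at X_1. This is the invariance the proof exploits: at a regular point that factor is a nonzero constant c, while here it is t, vanishing at t = 0.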
As a first step, the study was made for the particular function

x(t) = (e^{t} - 1)/t

where all derivatives exist and x^{(n)}_{t=0} = 1/(n+1). The idea of the proof is that, as n grows to infinity, the denominators n+1 of the derivative values pass through all prime numbers. Therefore, for a big enough prime denominator, the respective fraction in the equation P_{k} = 0 cannot cancel with the other fractions, leading to a contradiction. That way it was proven that x(t) can satisfy no polynomial ODE regular at t=0, meaning that its scalar elementariness is violated at t=0. This followed merely from the fact that the derivatives x^{(n)}_{t=0} are fractions (in lowest terms) with unboundedly growing denominators!
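The denominator pattern is easy to observe directly (a small Python check of my own, using only the series e^t - 1 = Σ_{k≥1} t^k/k!):

```python
from fractions import Fraction
from math import factorial

# x(t) = (e^t - 1)/t = sum_{n>=0} t^n/(n+1)!,  so  x^(n)(0) = n!/(n+1)! = 1/(n+1).
derivs = [Fraction(factorial(n), factorial(n + 1)) for n in range(30)]
assert all(d == Fraction(1, n + 1) for n, d in enumerate(derivs))

# In lowest terms the denominators n+1 run through every prime number:
denoms = sorted({d.denominator for d in derivs})
primes = [p for p in denoms if p > 1 and all(p % q for q in range(2, p))]
print(primes)  # every prime up to 30: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```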
This observation enabled uncovering a wide class of functions whose sequences of derivatives are also represented by fractions with infinitely growing denominators (see Table 1 in the article). Here are examples of the respective functions, all having t=0 as a point of violation of scalar elementariness:

(e^{t} - 1)/t,   (sin t)/t,   (ln(t + 1))/t,   cos t^{½}
It is notable that (sin t)/t at t=0 represents the "remarkable limit" (studied in the foundations of calculus). Its new meaning is that t=0 is a point of violation of the scalar elementariness of the respective entire function. The function (ln(1+t))/t is also closely related to the "remarkable limit" (1+t)^{1/t} (see section 4).
And here comes an unsolved part. There are functions suspected of having points of violation of elementariness whose sequence of fractions representing the derivatives at the point in question does not fall into the above pattern of fractions with denominators passing through all prime numbers. For example, the family of Bessel functions J_{p}(t) satisfies the known ODE

t^{2}x" + tx' + (t^{2} - p^{2})x = 0

singular at t=0, and it is not known whether they satisfy a regular ODE at t=0. For every function of the Bessel family J_{p} the expansion at t=0 is known, and its general term is

J_{p}^{(n)}(0) = (-1)^{k}C^{k}_{p+2k} / 2^{p+2k}

for n=p+2k, k=0, 1, 2, ..., and zero otherwise. However it is impossible to derive a contradiction from these fractions following the method of proof above (the denominators are not growing primes).
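The quoted general term can at least be confirmed against the standard series J_p(t) = Σ_k (-1)^k/(k!(p+k)!)(t/2)^{p+2k} (a Python spot check for integer p; my illustration only):

```python
from fractions import Fraction
from math import comb, factorial

def bessel_deriv_from_series(p, n):
    """n-th derivative of J_p at t=0, read off the standard power series."""
    if n < p or (n - p) % 2:
        return Fraction(0)
    k = (n - p) // 2
    coef = Fraction((-1) ** k, factorial(k) * factorial(p + k) * 2 ** (p + 2 * k))
    return factorial(n) * coef          # n-th derivative = n! * (coefficient of t^n)

def bessel_deriv_closed_form(p, n):
    """The general term quoted above: (-1)^k * C(p+2k, k) / 2^(p+2k), n = p+2k."""
    if n < p or (n - p) % 2:
        return Fraction(0)
    k = (n - p) // 2
    return Fraction((-1) ** k * comb(p + 2 * k, k), 2 ** (p + 2 * k))

assert all(bessel_deriv_from_series(p, n) == bessel_deriv_closed_form(p, n)
           for p in range(4) for n in range(14))
print(bessel_deriv_closed_form(0, 2))   # -1/2: indeed J_0''(0) = -1/2
```

The check also makes visible why the method above fails here: in lowest terms the denominators are powers of 2 (times small factors), not a sequence running through all primes.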
Do the Bessel functions have violation of scalar elementariness at t=0?
What about points of violation of vector elementariness?
Another unsolved part of the problem is whether the example functions above, having t=0 as a point of violation of their scalar elementariness, also violate their vector elementariness at that point (the answer depends on the equivalence of the two definitions, i.e. on the Conjecture).
3) A question about the functions having no points of violation of scalar elementariness
At the point t=0 all the functions above can satisfy no regular rational ODE and consequently no explicit polynomial ODE; but which kinds of ODEs can they satisfy at points other than t=0?
Indeed, at points other than t=0 they do satisfy rational ODEs (or implicit polynomial ODEs). As to explicit polynomial ODEs, here is the
Theorem 1. If a function x(t) has a point of violation of its scalar elementariness, it can satisfy no explicit polynomial ODE

x^{(n)} = P(t, x, x', x", ..., x^{(n-1)})     (5)

at any point of its phase space.
The equivalent form of this Theorem is...
Theorem 2. If a function x(t) satisfies an explicit polynomial ODE (5), x(t) is scalar elementary in its entire domain of existence (i.e. it has no points of violation of its scalar elementariness).
However the converse statement is an open Proposition.
Proposition. If a function x(t) is scalar elementary in its entire domain, it must satisfy some explicit polynomial ODE (5) rather than merely a rational ODE (3).
If a function x(t) is scalar elementary in the entire domain of its existence, the denominator g(t, x, x', x", ...) in R = f/g of the rational ODE (3) must never vanish on x(t) in its entire domain. This seemingly suggests that there must be no denominator at all, as the image of a (non-constant) complex function usually is the entire complex plane including 0, so that the Proposition seems true. In reality, however...
Example. Observing that for the function x(t) = e^{t} the lacunary value is zero and x = x' = x", the following ODE takes place:

x" = (x')^{2}/x,  x(0) = x'(0) = 1,

and its denominator never vanishes on x(t). Though in this trivial example we do know an alternative simple linear ODE satisfied by x(t) (in accordance with what the Proposition states), for an arbitrary x(t) we do not know whether the Proposition is true.
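The Example can be verified with exact truncated-series arithmetic (a minimal Python sketch of my own; the helpers diff and mul are ad hoc, not from any library):

```python
from fractions import Fraction
from math import factorial

N = 12
x = [Fraction(1, factorial(n)) for n in range(N)]   # Taylor series of e^t at 0

def diff(a):    # derivative of a truncated Taylor series
    return [(n + 1) * a[n + 1] for n in range(len(a) - 1)]

def mul(a, b):  # truncated Cauchy product
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

x1, x2 = diff(x), diff(diff(x))
# x*x'' - (x')^2 must vanish identically, i.e. x'' = (x')^2 / x:
residual = [u - v for u, v in zip(mul(x, x2), mul(x1, x1))]
print(all(c == 0 for c in residual))  # True
```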
4) Regularization of a polynomial ODE and the Conjecture
First consider these examples.
Example 1. The (entire) function x(t) = te^{t} satisfies the ODE

tx' - tx - x = 0     (6)

singular at t=0. However x(t) also satisfies the ODE

x" - 2x' + x = 0

regular at t=0.
Example 2. The (entire) function x(t) = (e^{t} - 1)/t satisfies the ODE

tx' - tx + x - 1 = 0     (7)

singular at t=0. However it is proven that there cannot exist a polynomial ODE regular at t=0 satisfied by this function. Nor can it satisfy a polynomial ODE with a constant nonzero factor at the leading derivative because, unlike in Example 1, here the function x(t) does have a point of violation of its scalar elementariness.
Example 3. Besides the ODE (7), the function x(t) = (e^{t} - 1)/t also satisfies the ODE

x"(x - 1)(t - 1) - (tx' - 2x' + x)(x' - 1) = 0

singular at t=1 (with any initial value for x, in particular with x(1) = e - 1 corresponding to x(t) in Example 2). At that, this x(t) also satisfies the ODE (7), regular at t=1 (though singular at t=0).
If we set a goal to get rid of the singularity at t=1 in Example 3, ODE (7) is the answer. (The ODE in Example 3 was obtained in a process of intentional "planting" of a singularity into the ODE (7) at t=1, described here.)
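With the minus signs written out, the ODE of Example 3 reads x"(x - 1)(t - 1) - (tx' - 2x' + x)(x' - 1) = 0, and it can be spot-checked numerically, expressing x' and x" through x via the ODE (7) and its derivative (my own check, not from the article):

```python
from math import exp

def x(t):  return (exp(t) - 1.0) / t
def x1(t): return x(t) - (x(t) - 1.0) / t            # from (7): t*x' = t*x - x + 1
def x2(t): return x1(t) + (x(t) - 2.0 * x1(t)) / t   # from the derivative of (7)

def example3(t):
    """Left-hand side of the Example 3 ODE:
    x''(x-1)(t-1) - (t*x' - 2*x' + x)(x' - 1)."""
    return (x2(t) * (x(t) - 1.0) * (t - 1.0)
            - (t * x1(t) - 2.0 * x1(t) + x(t)) * (x1(t) - 1.0))

for t in (0.5, 1.0, 2.0, 3.7, -1.2):   # any t except the singular point t = 0
    assert abs(example3(t)) < 1e-9
print("the ODE of Example 3 is satisfied at the sampled points")
```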
Now comes a question.
Let a holomorphic function x(t) satisfy a polynomial ODE over complex coefficients

P(t, x, x', ..., x^{(m)}) = 0     (8)

which happens to be singular at t=0. Does a method exist which either offers another polynomial ODE

Q(t, x, x', ..., x^{(n)}) = 0

satisfied by x(t) but regular at t=0, or establishes that x(t) cannot satisfy any polynomial ODE (8) regular at t=0?
Here is how this question relates to the Conjecture.
If the Conjecture is true and proved, then the answer to the question is this:
If a function x(t) at t=0 satisfies some polynomial system (2), then a regular replacement Q exists. And if the function x(t) at t=0 cannot satisfy a polynomial system (2), then a regular replacement Q does not exist.
5) More suspects for points of violation of elementariness
More suspects for points of violation of elementariness appear in quite unexpected situations. For example, a transcendental function y(x) which is a nontrivial solution of the equation of commuting powers x^{y} = y^{x} also satisfies an ODE which is singular at the point x=y=e:

y"x^{2}y^{2}(y - x) - (y')^{3}x^{4} + (y')^{2}x^{2}y(3x - 2y) + y'xy^{2}(3y - 2x) - y^{4} = 0

and it is not known whether y(x) satisfies a regular ODE at the point x=y=e.
It is proven that y(x) is holomorphic at x=y=e; however, no general term in closed form is known for its expansion (its derivatives at this point are obtainable only via complicated recurrent equations): see the article.
Does y(x) have a violation of scalar elementariness at x=y=e?
Being singular at this point, this ODE has more than one solution, being satisfied also by the trivial solution, the identity function y=x. If it were proven that any ODE satisfied by the nontrivial solution y(x) is satisfied also by the trivial one (thus having more than one solution), that would prove that x=y=e really is a point of violation of the scalar elementariness of the nontrivial y(x). However, this is not proved. Note that the ODE y'=1, satisfied by the trivial solution, is not satisfied by the nontrivial one.
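The singular ODE above can be spot-checked numerically on the nontrivial branch, using the classical parametrization x = s^{1/(s-1)}, y = s·x and implicit differentiation of y ln x = x ln y (a floating-point check of my own; y" is approximated by a central difference along the curve, so this is evidence, not a proof):

```python
from math import log

def curve(s):
    """Nontrivial solution of x^y = y^x: x = s^(1/(s-1)), y = s*x (s > 0, s != 1)."""
    x = s ** (1.0 / (s - 1.0))
    return x, s * x

def yprime(x, y):
    """dy/dx by implicit differentiation of y*ln x = x*ln y."""
    return (log(y) - y / x) / (log(x) - x / y)

def ode_lhs(s, h=1e-5):
    x, y = curve(s)
    y1 = yprime(x, y)
    # y'' along the curve via a central difference of y' with respect to x
    (xm, ym), (xp, yp) = curve(s - h), curve(s + h)
    y2 = (yprime(xp, yp) - yprime(xm, ym)) / (xp - xm)
    return (y2 * x**2 * y**2 * (y - x) - y1**3 * x**4
            + y1**2 * x**2 * y * (3*x - 2*y) + y1 * x * y**2 * (3*y - 2*x) - y**4)

for s in (1.5, 2.0, 3.0, 5.0):   # e.g. s = 2 gives the classical pair (2, 4)
    assert abs(ode_lhs(s)) < 1e-3
print("the singular ODE for the commuting powers holds at the sampled points")
```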
Amazingly, a similar open question exists concerning the equation of associating powers x^(y^z) = (x^y)^z. Its solution y = z^{1/(z-1)}, y(1) = e, is proved to be holomorphic at the point z=1. However, it is not known whether this is a point of violation of scalar elementariness.
There is another unsolved problem associated with the commuting powers: the function y(x) happens to be closely approximated by the hyperbola

h(x) = 1 + (e - 1)^{2}/(x - 1)

and it was proved that h(x) ≤ y(x) in a small neighborhood of the point x=e. (The proof for all x>1 was recently obtained by Lajos Lóczi, August 2020.)
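The inequality h(x) ≤ y(x) is likewise easy to probe numerically on the nontrivial branch (a floating-point spot check of my own, not a proof):

```python
from math import e

def branch(s):
    """Nontrivial solution of x^y = y^x, parametrized by s = y/x (s > 0, s != 1)."""
    x = s ** (1.0 / (s - 1.0))
    return x, s * x

def h(x):
    """The approximating hyperbola h(x) = 1 + (e-1)^2/(x-1)."""
    return 1.0 + (e - 1.0) ** 2 / (x - 1.0)

below = True
for i in range(1, 400):
    s = 0.05 + 0.02 * i                  # sweeps s over (0.05, 8), avoiding s = 1
    if abs(s - 1.0) < 1e-9:
        continue
    x, y = branch(s)
    assert abs(x ** y - y ** x) <= 1e-6 * x ** y   # the point lies on the curve
    below = below and h(x) <= y + 1e-9
print(below)
```

Near x = e the margin is tiny (the hyperbola osculates the curve at x = y = e, where h(e) = e), which is why a small tolerance is added.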
6) "Automatic Differentiation" for singular expressions, or scenarios of evaluation of formal derivatives at a singular point of a polynomial m-th order ODE
As we saw above, a function f(t) holomorphic at some point t=t_{0} may nevertheless be defined by an expression having a removable singularity at this point, so that f(t_{0}) must be specially defined as lim f(t) as t→t_{0}. Moreover, at certain points (where elementariness is lost) such functions can be defined only via singular rational expressions or ODEs.
Thus, we have functions holomorphic at a point, having all their derivatives there, but the conventional AD is inapplicable for computing those derivatives. Therefore we developed a process (as a generalization of AD) capable of computing n-th order derivatives of singular expressions or singular ODEs. Understandably, AD generalized for singularities is more complicated and less suitable for automation. That is why the expression "Automatic Differentiation" is taken in quotes in the title. Unlike at a regular point, evaluation of ODEs at singular points admits a variety of scenarios, which I first outlined in 2009, here in section 8.2.
All cases when an elementary function in question x(t) is defined via elementary systems of ODEs or expressions containing singularities may be reduced to one implicit polynomial ODE P(x^{(m)}, ..., x', x, t) = 0 singular at some point t=a (i.e. ∂P/∂X_{m} = 0 at t=a). The solution x(t) may or may not exist at this point, or may be not unique. At a singular point we speak about a process of formal evaluation of the derivatives, and this process may have different outcomes, outlined below.
Formal evaluation means applying the rules of differentiation to the source ODE without even expecting that the solution exists and has complex derivatives. This process creates an infinite generating system for obtaining a vector of coefficients of the formal derivatives x^{(m+1)}, x^{(m+2)}, ... . In both the regular and singular cases the generating system is a triangular system.
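For a concrete instance, take the ODE (7) of section 4, t x' - t x + x - 1 = 0, singular at t = 0. Its generating system can be written out explicitly (a Python sketch of my own under the substitution x = Σ a_n t^n); despite the singularity, each equation here is linear in a single new unknown, so the tree of formal solutions degenerates to a single branch reproducing the derivatives 1/(n+1):

```python
from fractions import Fraction

# Formal evaluation of ODE (7),  t*x' - t*x + x - 1 = 0,  at its singular
# point t = 0.  Collecting powers of t gives the triangular generating system:
#   t^0:          a_0 - 1 = 0
#   t^n (n >= 1): n*a_n - a_{n-1} + a_n = 0,  i.e.  (n+1)*a_n = a_{n-1}
N = 12
a = [Fraction(1)]                     # a_0 = 1, forced by the t^0 equation
for n in range(1, N):
    a.append(a[n - 1] / (n + 1))      # the unique root of the n-th equation

derivs, fact = [Fraction(1)], 1
for n in range(1, N):
    fact *= n
    derivs.append(fact * a[n])        # x^(n)(0) = n! * a_n

print([str(d) for d in derivs[:6]])   # ['1', '1/2', '1/3', '1/4', '1/5', '1/6']
```

In general, as the table below shows, an equation of the generating system at a singular point may instead have several roots, no roots, or degenerate entirely, producing the branching tree of formal solution vectors.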
In the regular case the generating system is linear, delivering a unique solution vector for the formal derivatives, representing the unique holomorphic solution x(t) and the vector of its true derivatives (x, x', x", ..., x^{(m)}, x^{(m+1)}, ...).
In the singular case the scenario becomes much more complicated, as displayed in the table below. If the process of evaluation of formal derivatives does not stop, the result of the generation is a finite or infinite set of solution vectors S = {(x, x', x", ..., x^{(m)}, x^{(m+1)}, ...)} comprised of formal derivatives, allowing one to write down the respective formal Taylor expansions (having zero or nonzero convergence radius). This set S has a sophisticated tree-like structure shown in the following Table 2.
At a regular point...
· The unique holomorphic solution x(t) exists, and its Taylor coefficients comprise the unique solution vector of the generating system.
· The generating system has a unique solution vector representing the derivatives x^{(n)} of the solution x(t).
· The generating system is triangular, and each of its equations is linear in one unknown with the same nonzero coefficient at every unknown.

At a singular point...
· A holomorphic solution of the ODE does not necessarily exist.
· The generating system may have no solution vectors, or more than one, or infinitely many of them.
· The generating system is triangular but not necessarily linear, as each of its equations in one unknown is polynomial with numeric or parametric coefficients.
· Some of the equations may degenerate into a numeric relation, true or false. If the n-th numeric relation is false, the respective branch of the tree ends (marked by "stop"). If the n-th numeric relation is true (i.e. a zero identity), the respective unknown is a free parameter (like c^{(m+2)}). If one or several unknowns in the previous equations are free parameters, the subsequent equations become nonlinear polynomial equations with parametric (rather than numeric) coefficients, so that generally it becomes impossible to numerically explore and evaluate them.
· If the set of solution vectors of the generating system is nonempty, some or all of the vectors still may comprise a divergent Taylor series (with zero convergence radius), representing no solution.

Table 1
The table below illustrates an example of evaluation of a polynomial ODE at a singular point: the values of the formal derivatives obtainable from the equations of the generating system.

x, x', x", ..., x^{(m)}: given as the initial values.

x^{(m+1)}: the equation in the one unknown x^{(m+1)} happened to have three roots a_{1}^{(m+1)}, a_{2}^{(m+1)}, a_{3}^{(m+1)}, so the tree branches three ways.

x^{(m+2)}:
· under a_{1}^{(m+1)}, the equation in the one unknown x^{(m+2)} happened to be of degree 2, having 2 roots a_{1}^{(m+2)}, a_{2}^{(m+2)};
· under a_{2}^{(m+1)}, the equation in the one unknown x^{(m+2)} degenerated into an identity, so that x^{(m+2)} may be an arbitrary complex number c^{(m+2)};
· under a_{3}^{(m+1)}, the equation yielded the roots a_{3}^{(m+2)}, a_{4}^{(m+2)}.

x^{(m+3)}:
· under a_{1}^{(m+2)}, the equation in the one unknown x^{(m+3)} degenerated into an inequality: this branch ends (stop);
· under a_{2}^{(m+2)}, the equation in the one unknown x^{(m+3)} happened to be of degree 3, with roots a_{1}^{(m+3)}, a_{2}^{(m+3)}, a_{3}^{(m+3)};
· under c^{(m+2)}, the equation in the one unknown x^{(m+3)}, and all similar equations further down, contain the arbitrary parameter c^{(m+2)}, so that their roots a_{4}^{(m+3)}(c^{(m+2)}), a_{5}^{(m+3)}(c^{(m+2)}), a_{6}^{(m+3)}(c^{(m+2)}) depend on this parameter too: the case of uncountably infinite branching;
· under a_{3}^{(m+2)} and a_{4}^{(m+2)}, the equations yielded the roots a_{7}^{(m+3)}, a_{8}^{(m+3)}, a_{9}^{(m+3)}, a_{10}^{(m+3)}.

...

Table 2
Here x, x', ..., x^{(m)} are given as the initial values, while x^{(m+1)}, x^{(m+2)}, ... are to be obtained in the process of solution of algebraic equations in one unknown of some degree k ≥ 0. If k=0, the respective equation degenerates into a numeric relation, true or false. If false, this branch ends with a contradiction. If true, this equation does not yield a specific solution value, allowing the respective x^{(n)} to be an arbitrary constant (like c^{(m+2)}). Therefore, the set of branches growing from every node of the tree is either empty (stop), or finite (the roots of an equation of degree k), or uncountably infinite (an arbitrary parameter).
I have illustrative examples for many (though not all) cases of this evaluation scenario.
Where (in which textbook or monograph) is this setting and scenario considered?