"At most logO(1)nlogO(1)n" means that there is a constant cc such that what is being measured is O(logcn)O(logcn).
In a more general context, $f(n) \in \log^{O(1)} n$ is equivalent to the statement that there exist (possibly negative) constants $a$ and $b$ such that $f(n) \in O(\log^a n)$ and $f(n) \in \Omega(\log^b n)$.
It is easy to overlook the $\Omega(\log^b n)$ lower bound. In a setting where it would matter (which would be very uncommon if you're exclusively interested in studying asymptotic growth), you shouldn't have complete confidence that the author actually intended the lower bound, and you would have to rely on context to make sure.
The literal meaning of the notation $\log^{O(1)} n$ is doing arithmetic on a family of functions: it denotes the family of all functions $\log^{g(n)} n$ where $g(n) \in O(1)$. This works in much the same way as how multiplying $O(g(n))$ by $h(n)$ results in $O(g(n)\,h(n))$, except that the result isn't expressed so simply.
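To make the family-of-functions reading concrete, here is a small numeric sketch (the choice $g(n) = 2 + 1/n$ is just an illustrative example, not anything from the text): since $g(n) \in O(1)$, the function $\log^{g(n)} n$ is a member of $\log^{O(1)} n$, and for $n \ge 3$ it sits between $\log^2 n$ and $\log^3 n$.

```python
import math

# Example member of the family log^{O(1)} n, using the
# (arbitrarily chosen) bounded exponent g(n) = 2 + 1/n.
for n in [10, 10**3, 10**6]:
    g = 2 + 1 / n                      # g(n) is in O(1): 2 < g(n) <= 2.1 here
    member = math.log(n) ** g          # log^{g(n)} n
    # Since 2 <= g(n) <= 3 and log n > 1, the member is sandwiched:
    assert math.log(n) ** 2 <= member <= math.log(n) ** 3
```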
Since the details of the lower bound are probably unfamiliar territory, it's worth looking at some counterexamples. Recall that any $g(n) \in O(1)$ is bounded in magnitude: there is a constant $c$ such that, for all sufficiently large $n$, $|g(n)| < c$.
When looking at asymptotic growth, usually only the upper bound $g(n) < c$ matters, since, e.g., you already know the function is positive. However, in full generality you have to pay attention to the lower bound $g(n) > -c$ as well.
This means that, contrary to more typical uses of big-oh notation, functions that decrease too rapidly can fail to be in $\log^{O(1)} n$; for example,
$$\frac{1}{n} = \log^{-(\log n)/(\log\log n)} n \notin \log^{O(1)} n$$
because
$$-\frac{\log n}{\log\log n} \notin O(1)$$
The exponent here grows in magnitude too rapidly to be bounded by $O(1)$.
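A quick numeric sketch of this counterexample: the identity $\log^{-(\log n)/(\log\log n)} n = 1/n$ follows from $\log^{e} n = \exp(e \log\log n)$, and the exponent's magnitude grows without bound as $n$ does.

```python
import math

# Verify numerically that 1/n = log^{-(log n)/(log log n)} n,
# and observe that the exponent's magnitude keeps growing.
prev_magnitude = 0.0
for n in [10**2, 10**4, 10**8]:
    e = -math.log(n) / math.log(math.log(n))   # the exponent
    # log^e n should equal 1/n (up to floating-point error)
    assert abs(math.log(n) ** e - 1 / n) <= 1e-9 * (1 / n)
    # |e| increases with n, so e is not bounded, i.e. not O(1)
    assert abs(e) > prev_magnitude
    prev_magnitude = abs(e)
```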
A counterexample of a somewhat different sort is that $-1 \notin \log^{O(1)} n$, since every function of the form $\log^{g(n)} n$ is positive for sufficiently large $n$.