At the request of Andrej and PhD, I am turning my comment into an answer, with apologies for self-advertising.
I recently wrote a paper in which I look at how to prove the Cook-Levin theorem (the NP-completeness of SAT) using a functional language (a variant of the λ-calculus) instead of Turing machines. In short, the only thing that would change "on this side of the planet" is, perhaps, that we would have considered a λ-calculus-related problem, instead of a Boolean-circuit-related problem, to be the "primordial" NP-complete problem.
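Just to give a concrete (if trivial) flavour of what such a problem could look like, here is a toy Haskell sketch, my own illustration and not the higher-order circuits of the paper, in which Booleans are Church-encoded as λ-terms and satisfiability becomes a question about evaluation (the names `phi` and `satisfiable2` are made up for the example):

```haskell
-- Church Booleans: a Boolean is a λ-term choosing between two arguments.
tru, fls :: a -> a -> a
tru x _ = x
fls _ y = y

-- The usual connectives, written as λ-terms over Church Booleans.
cNot :: (a -> a -> a) -> a -> a -> a
cNot b x y = b y x

cAnd, cOr :: (a -> a -> a) -> (a -> a -> a) -> a -> a -> a
cAnd b c x y = b (c x y) y
cOr  b c x y = b x (c x y)

-- A tiny "formula" phi(p, q) = p AND (NOT q), itself just a λ-term.
phi :: (a -> a -> a) -> (a -> a -> a) -> a -> a -> a
phi p q = cAnd p (cNot q)

-- Satisfiability of a two-variable formula: is there an assignment of
-- Church Booleans under which the term evaluates to "true"?  (Brute
-- force and exponential, of course; the point is only the shape of the
-- question, not how to solve it.)
satisfiable2 :: ((Bool -> Bool -> Bool) -> (Bool -> Bool -> Bool)
                 -> Bool -> Bool -> Bool) -> Bool
satisfiable2 f = or [ f p q True False | p <- [tru, fls], q <- [tru, fls] ]

main :: IO ()
main = print (satisfiable2 phi)  -- True (take p = tru, q = fls)
```

The problem actually treated in the paper is of course more structured than this brute-force toy; the sketch only shows that "is this λ-term satisfiable?" is a perfectly natural question to ask.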
A side note: the above-mentioned proof could be reformulated in a variant of Accattoli's λ-calculus with linear explicit substitutions mentioned in Andrej's answer, which is perhaps more standard than the λ-calculus I use in my paper.
Later edit: my answer was little more than a cut-and-paste of my comment, and I realize that something more should be said concerning the heart of the question, which, as I understand it, is: would it be possible to develop the theory of NP-completeness without Turing machines?
I agree with Kaveh's comment: the answer is yes, but perhaps only retrospectively. That is, when it comes to complexity (counting time and space), Turing machines are unbeatable in simplicity: the cost model is self-evident for time and almost self-evident for space. In the λ-calculus, things are far less evident: time cost models such as those mentioned by Andrej and given in Harper's book date from the mid-90s, and space cost models are still almost non-existent (I am aware of essentially one work, published in 2008).
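To make concrete what a time cost model even means here, below is a minimal sketch (my own toy code, with names like `step` and `normalize` made up for the example) of the naive measure "number of leftmost β-reduction steps to normal form" on de Bruijn-indexed terms. The hard part, which such a sketch completely glosses over, is precisely showing that a measure of this kind, suitably refined, is polynomially related to Turing-machine time.

```haskell
-- Untyped λ-terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- Shift free variables by d above cutoff c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App t u) = App (shift d c t) (shift d c u)

-- Capture-avoiding substitution of s for index j in t.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App t u) = App (subst j s t) (subst j s u)

-- One leftmost-outermost β-step, if any.
step :: Term -> Maybe Term
step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
step (App t u)       = case step t of
                         Just t' -> Just (App t' u)
                         Nothing -> App t <$> step u
step (Lam t)         = Lam <$> step t
step (Var _)         = Nothing

-- Normalize, returning the normal form together with the step count,
-- i.e. the "time" of the run under this naive cost model.
normalize :: Term -> (Term, Int)
normalize t = go t 0
  where go u n = case step u of
                   Just u' -> go u' (n + 1)
                   Nothing -> (u, n)

-- Example: (λ. 0 0) (λ. 0) normalizes to λ. 0 in 2 β-steps.
main :: IO ()
main = print (normalize (App (Lam (App (Var 0) (Var 0))) (Lam (Var 0))))
```

Counting raw β-steps like this is not, by itself, a reasonable cost model (substitution can blow up term sizes); the results from the literature cited above are exactly about repairing this.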
So, of course we can do everything using the purely functional perspective, but it is hard to imagine an alternative universe in which Hartmanis and Stearns define complexity classes using λ-terms and, 30 to 50 years later, people start adapting their work to Turing machines.
Then, as Kaveh points out, there is the "social" aspect: people were convinced that NP-completeness is important because Cook proved that a problem considered to be central in a widely studied field (theorem proving) is NP-complete (or, in more modern terminology, using Karp reductions, coNP-complete). Although the above shows that this may be done in the λ-calculus, maybe it would not be the most immediate thing to do (I have my reservations on this point, but let's not make this post too long).
Last but not least, it is worth observing that, even in my work mentioned above, when I show that HO CIRCUIT SAT may be reduced to CIRCUIT SAT, I do not explicitly exhibit a λ-term computing the reduction and prove that it always normalizes in a polynomial number of leftmost reduction steps; I just show that there is an algorithm which, intuitively, may be implemented in polynomial time, just as a complexity theorist would not explicitly build a Turing machine and prove that it runs in polynomial time (which, let me say it, would be even crazier than writing down a λ-term, let alone checking it for mistakes).
This phenomenon is pervasive in logic and computability theory, and complexity theory simply inherits it: formal systems and models of computation are often used only to know that one can formalize intuitions; after that, intuition alone is almost always enough (as long as it is used carefully). So reasoning about the difficulty of solving problems like SAT, TAUT, SUBSET SUM, GI, etc., and thus developing the theory of NP-completeness, may largely be done independently of the underlying computational model, as long as a reasonable cost model is found. Whether it is the λ-calculus, Turing machines, or Brainfuck programs does not really matter, as long as you know that your intuitions are sound. Turing machines gave an immediate, workable answer, and people didn't (and still don't) feel the need to go further.