The R17 brain theory implies that virtually all of the basic tenets of generative linguistics (GL) stated during the last decades are wrong.
One of the tenets of generative linguistics is that recursion is unique to language (or is even the one thing unique to it). Recursion means that an action operating on some input activates, during its execution, the same action on some other input (usually derived from the original one). The execution of actions can obviously involve the execution of other actions; the point of recursion is that it is the same action that is activated again (with a different input, otherwise we get into an infinite loop).
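In computer-science terms, the definition above can be illustrated by a function that invokes itself on an input derived from its original input. This is a minimal sketch, not taken from the R17 paper; the function name and the sentence frame are purely illustrative:

```python
def embed(depth):
    """Build an `X said that Y said that...' sentence recursively."""
    if depth == 0:
        return "Z said it"  # base case: the derived input stops changing here
    # The same action (`embed') is activated on a derived input (depth - 1);
    # without the changed input the base case would never be reached and
    # the calls would loop forever.
    return f"speaker_{depth} said that " + embed(depth - 1)

print(embed(3))
```

Note that each pending call must keep its own goal open until the inner call returns, which is exactly the property discussed below.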
Some people view the role of recursion in language as supporting an unlimited number of embedded clauses, for example in sentences such as `X said that Y said that Z said that...'. However, people very quickly (after two or three recursion levels) lose track of the meaning of such sentences; understanding deep recursive sentences involves a conscious mental effort and usually cannot be done without slow rehearsal. After just a few recursion levels, the only way to understand such sentences is to write them down, showing that the brain finds it particularly hard to implement recursion. This is a simple result of the Q process (the execution process presented by the R17 brain theory), because recursion requires the same node to maintain an arbitrary number of different open goals, which is exactly the situation that imposes strict limitations on the number of items in working memory (see the WM section in the R17 paper). Indeed, converging evidence supports the proposal that automated language use (syntax, semantics) relies on a specialized WM-like capacity [Caplan and Waters, 1999].
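The working-memory pressure described above can be made concrete with a toy count of simultaneously open goals. This is an illustrative sketch, not a real parser and not part of the R17 paper: it simply treats each `who' as opening a clause-level goal and each verb as closing one:

```python
# Toy illustration: count how many relative-clause goals stay open at
# once while a center-embedded sentence is read word by word.
VERBS = {"hired", "admitted", "met"}  # hypothetical closed-class list

def max_open_goals(sentence):
    open_goals = max_open = 0
    for word in sentence.lower().split():
        if word == "who":        # a new embedded clause opens a goal
            open_goals += 1
        elif word in VERBS:      # a verb closes the innermost open goal
            open_goals -= 1
        max_open = max(max_open, open_goals)
    return max_open

print(max_open_goals(
    "The patient who the nurse who the clinic hired admitted met Jack"))
# → 2 relative-clause goals are open at the deepest point
```

Each additional embedding level adds one more goal that must be held open, which is precisely what the Q process limits.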
In a more general view, language is said to use recursion as part of its capacity to generate a practically infinite number of different sentences. In this view, the highest level operation that drives the generation of the next word is recursive because it is repeatedly invoked an arbitrary number of times. Here, the brain is viewed as a black box whose external behavior is recursive. This is not recursion in the computer science sense, because the driving operation does not terminate when it finishes dealing with the input to return to another instance of itself. Moreover, this capacity is not unique to language. The quasi-infinite capacity of language is provided by the brain's ordinary sequence generation process (the Q process), along with the large number of words.
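The distinction drawn above can be shown directly: repeatedly invoking the same word-generating operation is iteration, not recursion in the computer-science sense, because no invocation suspends itself to wait for an inner copy of itself to return. A minimal sketch, with a hypothetical toy vocabulary and a random stand-in for whatever process actually selects the next word:

```python
import random

WORDS = ["the", "patient", "met", "Jack"]  # toy vocabulary

def next_word(context):
    # Stand-in for the real next-word process; here just a random pick.
    return random.choice(WORDS)

def generate(length):
    sentence = []
    for _ in range(length):
        # The driving operation is re-invoked an arbitrary number of
        # times, but each call returns fully before the next one starts:
        # no call stack of suspended instances is ever built up.
        sentence.append(next_word(sentence))
    return " ".join(sentence)

print(generate(5))
```

The quasi-infinite output space comes from the loop plus the vocabulary size, with no recursive suspension anywhere.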
There is solid experimental support for the argument above from the `missing VP' paradigm. Consider the following two sentences. Which one sounds more natural on first hearing (i.e., before reflecting on its meaning)?
(1) `The patient who the nurse who the clinic hired met Jack.'
(2) `The patient who the nurse who the clinic hired admitted met Jack.'
It has been repeatedly shown (e.g., in English [Gibson and Thomas, 1999] and French [Gimenes et al., 2009]) that (1) sounds more natural than (2), even though it is syntactically incorrect. Syntactic predictions for the second verb in (2) (`admitted') are created by the syntax node triggered by `the nurse'. The phrase `who the nurse' is followed by `who the clinic', which, since English does not use two consecutive `who' patterns, is represented using the same syntactic action nodes. Hence, this is a case where recursion should be used if it is used at all. However, the appearance of `admitted' sounds strange, and a plausible account for this is that the predictions (goals) for it have been overwritten after `who the clinic' was complemented by `hired'. Sentence (1) sounds natural because all words are from the same general semantic domain (so they create anticipations that conform to the input words), while no syntax nodes are left with unanswered goals.
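The overwriting account can be sketched as a toy model. This is an illustrative assumption, not an implementation from the R17 paper: a single shared syntax node holds only one open prediction, so re-activating it for the second `who' clause overwrites the earlier goal:

```python
class SyntaxNode:
    """Toy node: one prediction slot, no recursion stack."""

    def __init__(self):
        self.open_prediction = None

    def activate(self, predicted_verb):
        # Re-activation overwrites any earlier open goal.
        self.open_prediction = predicted_verb

    def satisfy(self, verb):
        matched = (verb == self.open_prediction)
        if matched:
            self.open_prediction = None
        return matched

node = SyntaxNode()
node.activate("admitted")  # goal created by `who the nurse'
node.activate("hired")     # `who the clinic' reuses the same node
print(node.satisfy("hired"))     # True: `hired' answers the surviving goal
print(node.satisfy("admitted"))  # False: its prediction was overwritten
```

With one slot per node, the model predicts exactly the missing-VP pattern: the innermost verb is expected, the overwritten one sounds superfluous.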
Note that the above does not imply that languages cannot use such patterns. It only implies that they cannot use recursion (i.e., the same node) to represent them. Indeed, embedding a clause within another is very common, and this is very simple to implement with each level supported by different syntax nodes. A sequence of three consecutive main (non-auxiliary) verbs is not used in English, but it exists in Dutch and German, and the experiment above indeed yields less convincing results in these languages [Frank et al., 2015].