Your cited studies are very revealing...
in their inadequacy to represent the likely voters in our society. Digging into the weeds of your cited studies is something you should have done yourself, but statistical studies can be so boring; why bother? Well, I bothered, because this is what I do. I am a Certified Six Sigma Black Belt. Look it up. In this regard, I am a professional of significant capability. Where many will accept a survey's results as credible, I am a huge skeptic, with reason. Most polls, and virtually all political polls in particular, come fully biased. First is the nature of those polled as a sub-group "representing American voters" in general. The reason the 2016 election results were so different from what the political polls were telling us is that the polls, like those in your cited studies, asked the wrong kind of people: registered voters. The more accurate sub-group will always be likely voters, a significantly different population. Your polls end up citing "most Americans," and you accept as fact that this represents America. Nope. Not even close.
Have a care to know what is meant by "most Americans." Who is polled to arrive at such conclusions? Hint: it's usually registered voters. That, alone, is a faulty beginning, because the target ought to be likely voters. The two terms are not synonymous. In 2016, there were 250M registered voters. Only 127M of them actually voted, about 50.8%, divided between Trump and Hillary. The other 123M, about 49.2%, did not vote and registered no opinion, yet they are included in a poll such as you reference. If they didn't care to vote, how valid is their opinion? And there, my friend, is your true majority: people with no opinion, because they refuse to back it by voting for either candidate.
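Don't take my word for the split; run the arithmetic yourself. Here is a minimal Python sketch using the round figures I quoted above (the 250M and 127M are inputs I am quoting, not something the script proves):

```python
# Turnout split, using the round figures quoted above (assumed inputs, not verified data)
registered = 250_000_000   # registered voters, 2016 (figure quoted in this post)
voted = 127_000_000        # ballots actually cast (figure quoted in this post)
stayed_home = registered - voted

print(f"voted:       {voted / registered:.1%}")        # 50.8%
print(f"stayed home: {stayed_home / registered:.1%}")  # 49.2%
```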
Now, shall we discuss what "margin of error" means? It's another reason why the 2016 polls, and virtually all political polls, are flawed and inaccurate. Margin of error is a measure of how many percentage points your result is likely to differ from the true population value, expressed as a plus/minus percentage. For example: a ±3 percent margin of error at the standard 95% confidence level means your statistic will land within 3 percentage points of the true population value (a 6-point spread, end to end) about 95% of the time. Great, except that your "studies," extensive though they were, failed to report a margin of error at all. A margin of error greater than ±3 percent is too great for general accuracy.
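If you want to see where the ±3 comes from, the textbook worst-case formula for a simple random sample is z·√(p(1−p)/n). Here is a minimal Python sketch, assuming the worst case p = 0.5 and the standard 95% confidence level (z ≈ 1.96); note that real political polls are almost never simple random samples, which is exactly my point:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error (in percentage points) for a simple
    random sample of size n at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (100, 500, 1_067, 2_000):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1f} points")
# n =   100: ±9.8 points
# n =   500: ±4.4 points
# n =  1067: ±3.0 points
# n =  2000: ±2.2 points
```

Roughly 1,000 respondents gets you inside ±3 under ideal assumptions; smaller samples, or samples that were never random to begin with, do not.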
Number of questions: a valid poll keeps the questionnaire to a minimum number of questions to avoid the very real risk of respondents losing interest, which wrecks the quality of the answers you collect. Your studies ran 98 pages, with multiple questions per page. An acceptable poll is limited to about 10 questions, total. Oops.
A sample with an equivalent number of subjects in each relevant demographic gives the most accurate results. For example, one should have a relatively even split of men and women, Democrats and Republicans, age groups, racial demographics, education levels [we'll key on that one as an example], and likelihood to vote.
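To make that concrete, here is a toy Python sketch, with numbers invented purely for illustration, of how a four-to-one gender imbalance tilts a headline figure and what simple post-stratification weighting would do to correct it:

```python
# Toy example: numbers are invented for illustration, not taken from any study.
# Suppose the population is 50/50 men and women, the sample came back 4-to-1
# women to men, and the two groups answer a yes/no question differently.
sample = {
    # group: (respondents, share answering "yes")
    "women": (800, 0.60),
    "men":   (200, 0.40),
}
population_share = {"women": 0.50, "men": 0.50}

# Raw (unweighted) result: dominated by whichever group was over-sampled.
total = sum(n for n, _ in sample.values())
raw = sum(n * yes for n, yes in sample.values()) / total

# Post-stratified result: each group re-scaled to its true population share.
weighted = sum(population_share[g] * yes for g, (_, yes) in sample.items())

print(f"raw:      {raw:.1%}")       # 56.0% -- tilted toward the over-sampled group
print(f"weighted: {weighted:.1%}")  # 50.0% -- what a balanced sample would show
```

Weighting can paper over a lopsided sample only if the pollster reports it and the under-sampled group is large enough to weight up; a cell with almost nobody in it, like the education demographic I get to next, cannot be rescued.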
Your "studies" had 4x women to men, for example, and the real cogent key: education: less than 10% of the total sub-group population bothered to register that they had completed high school or college. TEN PERCENT! Your "studies" results in this demographic, the absolute purpose of the "studies," turns on this very point. 10% of your "most Americans," half of whom DID NOT VOTE, were the basis of the claim that Trump supporters are less educated than Hillaryous Balloon Girl supporters. Sorry, I do not accept your cited "studies." I'll wager you knew none of the foregoing, or you would have been ashamed to cite it. Know your sources. Having a fair idea about who you're talking to helps, too.