It sounds like you tried to fill out the survey well, but not everyone does. For example, the article at https://www.insidehighered.com/news/2009/08/19/reputation-without-rigor mentions that one University of Wisconsin administrator's peer assessment ratings of the 262 "national universities" were as follows:
5 = "Distinguished" – University of Wisconsin (the school he works for), the New School in NYC (the school his son attends)
…
2 = “Adequate” – 259 schools including Harvard, Yale, MIT, etc.
1 = "Marginal" – Arizona State (which he said was hit hard by the economy)
The administrator filling out the survey said he was only especially knowledgeable about the University of Wisconsin, the New School, and Arizona State, so those were the only three he rated differently from his default of "adequate." He is quoted as saying, "The problem with an overall ranking of colleges is that without set criteria, you don't know what it means." Other administrators have claimed they were instructed to mark competitors low in an effort to boost their own college's ranking over the competition.
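To see how much a single default-heavy or strategic ballot can move an averaged peer score, here is a minimal sketch in Python. The ballot numbers are made up for illustration and are not from the actual survey:

```python
# Hypothetical peer-assessment ballots for one school (scale 1-5).
# Nine raters give honest scores near 4; one rater "strategically" gives a 1.
honest_ballots = [4, 4, 5, 4, 4, 3, 4, 5, 4]
strategic_ballot = [1]

all_ballots = honest_ballots + strategic_ballot

honest_avg = sum(honest_ballots) / len(honest_ballots)
overall_avg = sum(all_ballots) / len(all_ballots)

print(f"Average without the strategic vote: {honest_avg:.2f}")  # ~4.11
print(f"Average with the strategic vote:    {overall_avg:.2f}")  # ~3.80
```

One low vote among ten drops the school's average by about 0.3 points on a 5-point scale, which can matter a lot when rankings are decided by small differences in peer score.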
However, even when administrators attempt to fill out the survey fairly, how many of the hundreds of colleges on the survey do they know well enough to give a fair rating? And even if they are familiar with a college, what does a rating of 5 = "distinguished" actually mean? I expect vague language like this means different things to different people filling out the form, resulting in inconsistent ratings even among knowledgeable respondents.
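As a rough illustration, here is a small simulation (the quality scale and rater cutoffs are entirely hypothetical) of how raters who agree on a school's underlying quality can still return different scores, simply because an anchor like "distinguished" has no fixed meaning:

```python
import random

random.seed(0)

# All raters perceive the same underlying quality (0.8 on a 0-1 scale),
# but each rater has slightly different personal cutoffs for the 1-5 scores,
# modelling different readings of vague anchors like "distinguished".
true_quality = 0.8
num_raters = 20

ratings = []
for _ in range(num_raters):
    # Shift this rater's cutoffs for scores 2..5 by a random personal offset.
    offset = random.uniform(-0.15, 0.15)
    cutoffs = [0.2 + offset, 0.4 + offset, 0.6 + offset, 0.8 + offset]
    score = 1 + sum(true_quality >= c for c in cutoffs)
    ratings.append(score)

print("Ratings of the same school:", sorted(ratings))
mean = sum(ratings) / len(ratings)
var = sum((r - mean) ** 2 for r in ratings) / len(ratings)
print(f"Mean {mean:.2f}, variance {var:.2f}")
```

Even though every simulated rater perceives the identical quality, the ratings split between 4 and 5 purely because of where each one draws the line for "distinguished."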
In theory, you could look at the historical peer assessment ratings and see which colleges are on the rise or decline. However, almost nobody does this, since you have to pay $40 to see the ratings, and even after paying you can only see the current ratings, not the previous ones needed to track how they have changed.