


ISSN 2364-3641



vpl-reports.de

Open Access


The evaluation system used here (updated Jul 15, 2024)

Publications in this journal are not peer-reviewed, at least not in the usual sense of 2-3 referees evaluating a manuscript before publication. But to give you an idea of what other readers think about the papers, and to let you become a referee yourself, I have introduced a post-publication reviewing system in which all readers can comment on the papers.

Originally, I had also established a quick and anonymous rating system (from 1, poor, to 5, excellent). To avoid misuse, I blocked it against repeated entries from the same IP address. But this restriction seems to have challenged certain users, who then tried to leave hundreds of repetitive ratings within a very short time. Some of these ratings were blocked (those coming from the same IP address), but others (coming from numerous generated IP addresses) produced fantastic - but wrong - ratings, so that I finally had to remove this "goody" from my websites. My primary interest is in research, not in developing tricky algorithms to stop readers from trying to be even more tricky.
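The site's actual implementation is not shown here; as a hedged illustration only, per-IP deduplication of ratings might be sketched like this (all names are hypothetical, not taken from the site's code):

```python
class RatingStore:
    """Collects 1-5 ratings, accepting at most one rating per IP address.

    Hypothetical sketch: the real site's blocker is not published.
    """

    def __init__(self):
        # ip -> rating; one entry per address blocks repeat votes
        self.ratings = {}

    def submit(self, ip, rating):
        """Record a rating; reject out-of-range values and repeat IPs."""
        if not 1 <= rating <= 5:
            return False
        if ip in self.ratings:
            return False  # repetitive entry from the same IP address
        self.ratings[ip] = rating
        return True

    def average(self):
        """Mean of accepted ratings, or None if none were accepted."""
        if not self.ratings:
            return None
        return sum(self.ratings.values()) / len(self.ratings)
```

As the text above notes, such a check is easily defeated by submissions arriving from many different (e.g. generated or spoofed) addresses, since each new address looks like a new voter.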

Advantages and disadvantages of rating systems

If you publish your own scientific work, you will have thought about the advantages and disadvantages of various evaluation systems. In the usual reviewing process, the publication of a new manuscript is based on the comments of a few referees from the peer community. Among the clear benefits of this system are the discovery and removal of (some) errors and mistakes and, often enough, an improvement in the clarity of presentation. But the system also has disadvantages. One is the tendency of the peer community to stick to current ideas and models. Another is the limitation to very few people - all experts, but probably not experts in exactly the topic of your manuscript. This restriction could be overcome in a post-publication rating system in which all experts from your field may comment on the paper - provided, of course, they do indeed send their comments.

A serious restriction of the chosen evaluation system, however, is that many science indices - PubMed, Web of Science, and others - exclusively list papers from peer-reviewed journals. That is a pity, and perhaps not quite fair in the context of the growing number of "predatory" journals. Some of these journals seem to collect papers from everywhere to earn publication fees, and it is not always clear that their peer reviewing is indeed performed by top scientists in each field. But since these journals are formally "peer reviewed", their papers are listed in the science indices. I think the decision about which papers are included in these lists needs revision, along with the development of other, perhaps more appropriate quality indicators.

 
