11/6/2022

Jerry Brunner and I developed a method to estimate the average power of studies while taking selection for significance into account. We validated our method with simulation studies. We also show that other methods that are already in use for effect size estimation, like p-curve, produce biased (inflated) estimates. You might think that an article that relies on validated simulations to improve on an existing method (z-curve is better than p-curve) would be published, especially by a journal that was created to improve and advance psychological science.

However, this blog post shows that AMPPS works like any other traditional, for-profit, behind-paywall journal. Normally, you would not be able to see the editorial decision letter or know that an author of the inferior p-curve method provided a biased review. But here you can see how traditional publishing works (or doesn't work). Meanwhile, the article has been published in the journal Meta-Psychology, a journal without fees for authors, with open access to articles and transparent peer reviews. Also, three years after the p-curve authors were alerted to the fact that their method can provide biased estimates, they have not modified their app or posted a statement that alerts readers to this problem. The peer-review report can be found on OSF. This is how even meta-scientists operate.

Thank you for submitting your manuscript (AMPPS-17-0114) entitled "Z-Curve: A Method for Estimating Replicability Based on Test Statistics in Original Studies" to Advances in Methods and Practices in Psychological Science (AMPPS). First, my apologies for the overly long review process. I initially struggled to find reviewers for the paper, and I also had to wait for the final review. In the end, I received guidance from three expert reviewers whose comments appear at the end of this message. Reviewers 1 and 2 chose to remain anonymous, and Reviewer 3 is Leif Nelson (signed review). Reviewers 1 and 2 were both strongly negative and recommended rejection. Nelson was more positive about the goals of the paper and the approach, although he wasn't entirely convinced by the approach and evidence. I read the paper independently of the reviews, both before sending it out and again before reading the reviews (given that it had been a while).