The short story: PLoS ONE just published our new paper Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics, in collaboration with Manuel López-Ibáñez. The paper discusses known benefits of multi-objective evolutionary algorithms in the context of the evolutionary robotics domain, and is supported by novel experimental results exploiting three benchmarking problems.
The long story: this has been the most painful publication of my career to date, in many respects. On the one hand, this was a side project with respect to my other activities, and I could dedicate only a very small part of my time to it. As a matter of fact, I started the experiments back in June 2011 and finished only in April 2013. Nearly two years in which I could work on this only a few hours a week, but I persisted because I believed (and still do) that the subject is very relevant for the ER community.
On the other hand, this has been a prototypical example of how peer review should not work: I experienced extremely long delays, subjective judgments, and unprofessional behaviour. As a matter of fact, more than two years passed since the first submission. Here’s the detailed chronology:
- IEEE Transactions on Evolutionary Computation: this seemed the natural choice for a paper that discusses the application of MOEAs to robotics.
- June 2013: First submission
- December 2013 (+5M): Reject. The paper “seems not to be appropriate for IEEE TEVC” (sic!)
- IEEE Transactions on Cybernetics. After the first rejection, we decide to go for a “journal dealing with robotics-related topics”, as suggested by the editor of IEEE-TEVC. After evaluating several options, IEEE-TCYB seemed the natural choice, as our work fits entirely within the scope of the journal.
- January 2014: First submission
- March 2014 (+2M): Major revisions. We revise the paper according to the comments. Two reviewers give useful comments that lead to sensible improvements of the introductory sections. A third reviewer returns three lines with some minor requests that we address in the revision.
- April 2014: Submission of revised manuscript (R1)
- May 2014 (+1M): Major revisions. The first two reviewers are satisfied with our revision; the third one changes his judgment to ‘reject’ and returns essentially the same three lines as in the first review. We ask the editor for clarification, as we had received no feedback on our revision, but we get no response. We decide to proceed with the normal process and submit, on the very last available day, a revision in which mainly typos are corrected, asking the editor for clarification in the cover letter and the reviewer in our response letter.
- June 2014: Submission of revised manuscript (R2)
- October 2014 (+4M): Reject. The third reviewer copy-pastes his previous review. No response to our requests from the editor or the opposing reviewer. The other two reviewers are still very satisfied with our paper (judged ‘excellent’).
- PLoS ONE. After this troubled rejection based on subjective judgments, we want our paper assessed for its technical merits first, and we choose a general-purpose journal like PLoS ONE because we strongly believe in its commitment to publish any technically sound study.
- October 2014: First submission
- February 2015 (+4M): Major revisions. The paper is technically sound, but the reviewers ask to rework the introductory sections to account for additional literature and to review our discussion in the light of other (recently) published papers. We decide to address all requests, even at the cost of slightly changing and downgrading our initial claims.
- April 2015: Submission of revised manuscript (R1)
- June 2015: Minor revisions. One reviewer asks us to reduce the scope of our claims and requests a change of title. We apply all the cosmetic changes that were requested, while at the same time trying to restore weight to our initial claims.
- July 2015: Submission of revised manuscript (R2)
- August 2015: Accepted!
In the end, I should acknowledge that some reviewers provided really insightful comments: the paper gained a lot from them, and I learned a lot in preparing the various revisions. However, the whole reviewing process was flawed, heavily influenced by subjective judgments that had nothing to do with the experimental study (we made practically no changes to those sections since the first submission), and by the very little attention the journal editors paid to the content of the paper. Long story short: without attention and oversight, the traditional peer-review process reduces to a useless waste of time.