Confluence Competition

Results of Past Competitions

Winners

categories  TRS  CTRS    CPF          HRS     GCR   UN
CoCo 2018   CSI  ConCon  CSI+CeTA     none*   AGCP  ACP
CoCo 2017   CSI  ConCon  ConCon+CeTA  CSI^ho  AGCP  CSI

categories  TRS           CTRS     CPF-TRS    CPF-CTRS      HRS
CoCo 2016   CSI           ConCon   CSI+CeTA   ConCon+CeTA   CSI^ho
CoCo 2015   ACP and CSI   ConCon   CSI+CeTA   -             CSI^ho
CoCo 2014   ACP           ConCon   CeTA       -             -
CoCo 2013   ACP           -        CeTA       -             -
CoCo 2012   ACP           -        CeTA       -             -

(*) The winner of the HRS category was not decided at CoCo 2018 due to underspecified semantics.

Details

Detailed results (problems, tools, and all answers) of the past competitions are available here.

Rules for erroneous answers

A tool may have a bug and output non-plausible answers: incorrect answers, answers based on incorrect reasoning, or answers lacking an explanation understandable to human experts. A tool that outputs non-plausible answers cannot win the categories in which those answers are involved. The following list shows what happens when a non-plausible answer is spotted by a human or a machine during or after the competition.